I1014 13:34:55.455831 11 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1014 13:34:55.462211 11 e2e.go:129] Starting e2e run "ecaf12d6-4eab-40fd-a91d-dd048d482243" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1602682478 - Will randomize all specs
Will run 303 of 5232 specs
Oct 14 13:34:56.114: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 13:34:56.161: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 14 13:34:56.378: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 14 13:34:56.560: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 14 13:34:56.560: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 14 13:34:56.560: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 14 13:34:56.607: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 14 13:34:56.607: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 14 13:34:56.607: INFO: e2e test version: v1.19.3-rc.0
Oct 14 13:34:56.611: INFO: kube-apiserver version: v1.19.0
Oct 14 13:34:56.612: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 13:34:56.637: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:34:56.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Oct 14 13:34:56.754: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Oct 14 13:34:56.785: INFO: Waiting up to 5m0s for pod "pod-f688d880-d40b-4121-8b9e-f5b5e3669f42" in namespace "emptydir-4847" to be "Succeeded or Failed"
Oct 14 13:34:56.798: INFO: Pod "pod-f688d880-d40b-4121-8b9e-f5b5e3669f42": Phase="Pending", Reason="", readiness=false. Elapsed: 12.864797ms
Oct 14 13:34:58.811: INFO: Pod "pod-f688d880-d40b-4121-8b9e-f5b5e3669f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025862371s
Oct 14 13:35:00.821: INFO: Pod "pod-f688d880-d40b-4121-8b9e-f5b5e3669f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035428141s
STEP: Saw pod success
Oct 14 13:35:00.821: INFO: Pod "pod-f688d880-d40b-4121-8b9e-f5b5e3669f42" satisfied condition "Succeeded or Failed"
Oct 14 13:35:00.827: INFO: Trying to get logs from node latest-worker pod pod-f688d880-d40b-4121-8b9e-f5b5e3669f42 container test-container:
STEP: delete the pod
Oct 14 13:35:01.086: INFO: Waiting for pod pod-f688d880-d40b-4121-8b9e-f5b5e3669f42 to disappear
Oct 14 13:35:01.186: INFO: Pod pod-f688d880-d40b-4121-8b9e-f5b5e3669f42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:35:01.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4847" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":49,"failed":0}
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:35:01.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-caa84dad-d4bc-4345-8e0f-dcee03dabe04
STEP: Creating a pod to test consume configMaps
Oct 14 13:35:01.354: INFO: Waiting up to 5m0s for pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea" in namespace "configmap-7530" to be "Succeeded or Failed"
Oct 14 13:35:01.383: INFO: Pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 29.071045ms
Oct 14 13:35:03.391: INFO: Pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037013075s
Oct 14 13:35:05.399: INFO: Pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045212222s
Oct 14 13:35:07.409: INFO: Pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054985505s
STEP: Saw pod success
Oct 14 13:35:07.409: INFO: Pod "pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea" satisfied condition "Succeeded or Failed"
Oct 14 13:35:07.415: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea container configmap-volume-test:
STEP: delete the pod
Oct 14 13:35:07.441: INFO: Waiting for pod pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea to disappear
Oct 14 13:35:07.475: INFO: Pod pod-configmaps-e480e93f-45bd-4dfe-87f9-67eb9836b3ea no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:35:07.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7530" for this suite.
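For context, a ConfigMap-as-volume pod of the kind this test creates looks roughly like the sketch below. This is an illustration only, not the manifest generated by the run: the object names, data keys, image, and mount path are hypothetical stand-ins, and the run itself uses generated names such as configmap-test-volume-caa84dad-…; the runAsUser setting is what makes the pod consume the volume "as non-root".

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume   # illustrative; the e2e run generates a unique name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps          # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # non-root, as in "consumable ... as non-root"
  containers:
  - name: configmap-volume-test
    image: busybox              # illustrative; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

The test pattern visible in the log is the same for many volume conformance cases: create the pod, wait for it to reach "Succeeded or Failed", read the container logs to check the mounted content, then delete the pod.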
• [SLOW TEST:6.282 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":49,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:35:07.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach]
[sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:35:12.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9419" for this suite.
• [SLOW TEST:5.252 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":3,"skipped":111,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:35:12.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8705, will wait for the garbage collector to delete the pods
Oct 14 13:35:19.013: INFO: Deleting Job.batch foo took: 10.052649ms
Oct 14 13:35:19.116: INFO: Terminating Job.batch foo pods took: 102.991725ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:35:55.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8705" for this suite.
• [SLOW TEST:42.993 seconds]
[sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":4,"skipped":125,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:35:55.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5332 STEP: creating service affinity-clusterip-transition in namespace services-5332 STEP: creating replication controller affinity-clusterip-transition in namespace services-5332 I1014 13:35:55.942493 11 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5332, replica count: 3 I1014 13:35:58.995031 11 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 13:36:01.996539 11 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 13:36:04.997614 11 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 13:36:05.008: INFO: Creating new exec pod Oct 14 13:36:10.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5332 execpod-affinitytfp7r -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Oct 14 13:36:14.365: INFO: stderr: "I1014 13:36:14.253757 33 log.go:181] (0x273e460) (0x273e9a0) Create stream\nI1014 13:36:14.255520 33 log.go:181] (0x273e460) (0x273e9a0) Stream added, broadcasting: 1\nI1014 13:36:14.263735 33 log.go:181] (0x273e460) Reply frame received for 1\nI1014 13:36:14.264348 33 log.go:181] (0x273e460) (0x273ff10) Create stream\nI1014 13:36:14.264453 33 log.go:181] (0x273e460) (0x273ff10) Stream added, broadcasting: 3\nI1014 13:36:14.266090 33 log.go:181] (0x273e460) Reply frame received for 3\nI1014 13:36:14.266381 33 log.go:181] (0x273e460) (0x2694070) Create 
stream\nI1014 13:36:14.266466 33 log.go:181] (0x273e460) (0x2694070) Stream added, broadcasting: 5\nI1014 13:36:14.267652 33 log.go:181] (0x273e460) Reply frame received for 5\nI1014 13:36:14.332007 33 log.go:181] (0x273e460) Data frame received for 3\nI1014 13:36:14.332289 33 log.go:181] (0x273ff10) (3) Data frame handling\nI1014 13:36:14.332589 33 log.go:181] (0x273e460) Data frame received for 5\nI1014 13:36:14.332823 33 log.go:181] (0x2694070) (5) Data frame handling\nI1014 13:36:14.345173 33 log.go:181] (0x273e460) Data frame received for 1\nI1014 13:36:14.345687 33 log.go:181] (0x273e9a0) (1) Data frame handling\nI1014 13:36:14.345991 33 log.go:181] (0x273e9a0) (1) Data frame sent\nI1014 13:36:14.346348 33 log.go:181] (0x273e460) (0x273e9a0) Stream removed, broadcasting: 1\nI1014 13:36:14.348611 33 log.go:181] (0x2694070) (5) Data frame sent\nI1014 13:36:14.348700 33 log.go:181] (0x273e460) Data frame received for 5\nI1014 13:36:14.348779 33 log.go:181] (0x2694070) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI1014 13:36:14.354664 33 log.go:181] (0x2694070) (5) Data frame sent\nI1014 13:36:14.354795 33 log.go:181] (0x273e460) Data frame received for 5\nI1014 13:36:14.354865 33 log.go:181] (0x2694070) (5) Data frame handling\nI1014 13:36:14.355059 33 log.go:181] (0x273e460) Go away received\nI1014 13:36:14.356331 33 log.go:181] (0x273e460) (0x273e9a0) Stream removed, broadcasting: 1\nI1014 13:36:14.356793 33 log.go:181] (0x273e460) (0x273ff10) Stream removed, broadcasting: 3\nI1014 13:36:14.357034 33 log.go:181] (0x273e460) (0x2694070) Stream removed, broadcasting: 5\n" Oct 14 13:36:14.366: INFO: stdout: "" Oct 14 13:36:14.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5332 execpod-affinitytfp7r -- /bin/sh -x -c nc -zv -t -w 2 10.102.183.114 80' Oct 14 13:36:15.837: 
INFO: stderr: "I1014 13:36:15.725560 53 log.go:181] (0x2a52000) (0x2a52070) Create stream\nI1014 13:36:15.728893 53 log.go:181] (0x2a52000) (0x2a52070) Stream added, broadcasting: 1\nI1014 13:36:15.739235 53 log.go:181] (0x2a52000) Reply frame received for 1\nI1014 13:36:15.740015 53 log.go:181] (0x2a52000) (0x2f96070) Create stream\nI1014 13:36:15.740118 53 log.go:181] (0x2a52000) (0x2f96070) Stream added, broadcasting: 3\nI1014 13:36:15.741817 53 log.go:181] (0x2a52000) Reply frame received for 3\nI1014 13:36:15.742174 53 log.go:181] (0x2a52000) (0x2f962a0) Create stream\nI1014 13:36:15.742261 53 log.go:181] (0x2a52000) (0x2f962a0) Stream added, broadcasting: 5\nI1014 13:36:15.743580 53 log.go:181] (0x2a52000) Reply frame received for 5\nI1014 13:36:15.821987 53 log.go:181] (0x2a52000) Data frame received for 3\nI1014 13:36:15.822281 53 log.go:181] (0x2f96070) (3) Data frame handling\nI1014 13:36:15.822564 53 log.go:181] (0x2a52000) Data frame received for 5\nI1014 13:36:15.822717 53 log.go:181] (0x2f962a0) (5) Data frame handling\nI1014 13:36:15.823363 53 log.go:181] (0x2f962a0) (5) Data frame sent\nI1014 13:36:15.823585 53 log.go:181] (0x2a52000) Data frame received for 1\nI1014 13:36:15.823743 53 log.go:181] (0x2a52070) (1) Data frame handling\n+ nc -zv -t -w 2 10.102.183.114 80\nConnection to 10.102.183.114 80 port [tcp/http] succeeded!\nI1014 13:36:15.823915 53 log.go:181] (0x2a52070) (1) Data frame sent\nI1014 13:36:15.824349 53 log.go:181] (0x2a52000) Data frame received for 5\nI1014 13:36:15.824470 53 log.go:181] (0x2f962a0) (5) Data frame handling\nI1014 13:36:15.825241 53 log.go:181] (0x2a52000) (0x2a52070) Stream removed, broadcasting: 1\nI1014 13:36:15.827155 53 log.go:181] (0x2a52000) Go away received\nI1014 13:36:15.829451 53 log.go:181] (0x2a52000) (0x2a52070) Stream removed, broadcasting: 1\nI1014 13:36:15.829857 53 log.go:181] (0x2a52000) (0x2f96070) Stream removed, broadcasting: 3\nI1014 13:36:15.830095 53 log.go:181] (0x2a52000) (0x2f962a0) 
Stream removed, broadcasting: 5\n" Oct 14 13:36:15.837: INFO: stdout: "" Oct 14 13:36:15.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5332 execpod-affinitytfp7r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.183.114:80/ ; done' Oct 14 13:36:17.603: INFO: stderr: "I1014 13:36:17.374317 73 log.go:181] (0x2758bd0) (0x2758c40) Create stream\nI1014 13:36:17.376229 73 log.go:181] (0x2758bd0) (0x2758c40) Stream added, broadcasting: 1\nI1014 13:36:17.385576 73 log.go:181] (0x2758bd0) Reply frame received for 1\nI1014 13:36:17.386381 73 log.go:181] (0x2758bd0) (0x2f1e700) Create stream\nI1014 13:36:17.386472 73 log.go:181] (0x2758bd0) (0x2f1e700) Stream added, broadcasting: 3\nI1014 13:36:17.388630 73 log.go:181] (0x2758bd0) Reply frame received for 3\nI1014 13:36:17.389167 73 log.go:181] (0x2758bd0) (0x2d249a0) Create stream\nI1014 13:36:17.389288 73 log.go:181] (0x2758bd0) (0x2d249a0) Stream added, broadcasting: 5\nI1014 13:36:17.391513 73 log.go:181] (0x2758bd0) Reply frame received for 5\nI1014 13:36:17.443765 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.443959 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.444051 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.444195 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.444301 73 log.go:181] (0x2f1e700) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.444569 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.449811 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.449875 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.449942 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.450683 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.450803 73 log.go:181] 
(0x2d249a0) (5) Data frame handling\n+ echo\n+ curlI1014 13:36:17.450888 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.451011 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.451088 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.451191 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.451281 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.451392 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.451507 73 log.go:181] (0x2d249a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.454044 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.454193 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.454305 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.454661 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.454736 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.454817 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.454880 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.454938 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\nI1014 13:36:17.455119 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.455252 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.455375 73 log.go:181] (0x2f1e700) (3) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.455597 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.459829 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.459963 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.460087 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.460288 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.460362 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.460430 73 log.go:181] (0x2d249a0) (5) Data frame 
sent\n+ echo\n+ curl -q -s --connect-timeout 2I1014 13:36:17.460496 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.460644 73 log.go:181] (0x2d249a0) (5) Data frame handling\n http://10.102.183.114:80/\nI1014 13:36:17.460752 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.461032 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.461156 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.461253 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.465406 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.465545 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.465673 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.466171 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.466315 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.466418 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.466536 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.466628 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.466762 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.474129 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.474236 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.474325 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.474558 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.474710 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.474816 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.474924 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.475055 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.475193 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.481561 73 
log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.481651 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.481815 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.485685 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.485785 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.485894 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.486032 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.486106 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.486191 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.487422 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.487542 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.487700 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.488446 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.488587 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.488709 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\nI1014 13:36:17.488967 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.489104 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.489250 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.489368 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.489461 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.489593 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.494388 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.494506 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.494612 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.495260 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.495378 73 log.go:181] (0x2f1e700) (3) 
Data frame handling\nI1014 13:36:17.495464 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.495611 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.495700 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.496048 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.501039 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.501146 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.501258 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.501349 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.501414 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.501487 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.501552 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.501618 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.501683 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.507934 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.508088 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.508318 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.508546 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.508656 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.508756 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.508930 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.509025 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.509154 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.513404 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.513535 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 
13:36:17.513693 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.514191 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.514285 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.514416 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.514589 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.514792 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.514901 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.520563 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.520675 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.520934 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.521295 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.521398 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.521501 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.521634 73 log.go:181] (0x2d249a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.521728 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.521820 73 log.go:181] (0x2d249a0) (5) Data frame sent\nI1014 13:36:17.526635 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.526732 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.526833 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.527439 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.527532 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.527622 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.527717 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.527798 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.527903 73 log.go:181] 
(0x2f1e700) (3) Data frame sent\nI1014 13:36:17.531203 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.531335 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.531492 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.532037 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.532128 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.532208 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.532281 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.532502 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.532697 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.538574 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.538682 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.538785 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.539557 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.539658 73 log.go:181] (0x2d249a0) (5) Data frame handling\nI1014 13:36:17.539741 73 log.go:181] (0x2d249a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:17.539818 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.539887 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.539973 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.545030 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.545154 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.545311 73 log.go:181] (0x2f1e700) (3) Data frame sent\nI1014 13:36:17.546246 73 log.go:181] (0x2758bd0) Data frame received for 3\nI1014 13:36:17.546413 73 log.go:181] (0x2f1e700) (3) Data frame handling\nI1014 13:36:17.546606 73 log.go:181] (0x2758bd0) Data frame received for 5\nI1014 13:36:17.546773 73 log.go:181] (0x2d249a0) (5) Data frame 
handling\nI1014 13:36:17.548150 73 log.go:181] (0x2758bd0) Data frame received for 1\nI1014 13:36:17.548255 73 log.go:181] (0x2758c40) (1) Data frame handling\nI1014 13:36:17.548397 73 log.go:181] (0x2758c40) (1) Data frame sent\nI1014 13:36:17.556949 73 log.go:181] (0x2758bd0) (0x2758c40) Stream removed, broadcasting: 1\nI1014 13:36:17.567940 73 log.go:181] (0x2758bd0) Go away received\nI1014 13:36:17.590902 73 log.go:181] (0x2758bd0) (0x2758c40) Stream removed, broadcasting: 1\nI1014 13:36:17.591856 73 log.go:181] (0x2758bd0) (0x2f1e700) Stream removed, broadcasting: 3\nI1014 13:36:17.592057 73 log.go:181] (0x2758bd0) (0x2d249a0) Stream removed, broadcasting: 5\n" Oct 14 13:36:17.608: INFO: stdout: "\naffinity-clusterip-transition-7tp4n\naffinity-clusterip-transition-7tp4n\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-7tp4n\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-7tp4n\naffinity-clusterip-transition-pd5wd\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-pd5wd" Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-7tp4n Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-7tp4n Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-7tp4n Oct 14 13:36:17.609: INFO: Received response from 
host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-7tp4n Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:17.609: INFO: Received response from host: affinity-clusterip-transition-pd5wd Oct 14 13:36:17.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5332 execpod-affinitytfp7r -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.183.114:80/ ; done' Oct 14 13:36:19.313: INFO: stderr: "I1014 13:36:19.094511 94 log.go:181] (0x2badab0) (0x2badb20) Create stream\nI1014 13:36:19.098572 94 log.go:181] (0x2badab0) (0x2badb20) Stream added, broadcasting: 1\nI1014 13:36:19.110871 94 log.go:181] (0x2badab0) Reply frame received for 1\nI1014 13:36:19.111866 94 log.go:181] (0x2badab0) (0x267e690) Create stream\nI1014 13:36:19.111985 94 log.go:181] (0x2badab0) (0x267e690) Stream added, broadcasting: 3\nI1014 13:36:19.113858 94 log.go:181] (0x2badab0) Reply frame received for 3\nI1014 13:36:19.114111 94 log.go:181] (0x2badab0) (0x2badce0) Create stream\nI1014 13:36:19.114166 94 log.go:181] (0x2badab0) (0x2badce0) Stream added, broadcasting: 5\nI1014 13:36:19.115346 94 log.go:181] (0x2badab0) Reply frame received for 5\nI1014 13:36:19.207950 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.208373 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 
13:36:19.208680 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.209000 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.214963 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.217395 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.217712 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.217845 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.217944 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.218163 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.218337 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.224440 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.224552 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.224668 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI1014 13:36:19.225075 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.226209 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.226441 94 log.go:181] (0x2badce0) (5) Data frame handling\n 2 http://10.102.183.114:80/\nI1014 13:36:19.226564 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.226690 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.226770 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.226880 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.226984 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.227071 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.227223 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.227313 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.227437 94 
log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.227556 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.227638 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.227695 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.227786 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.227863 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.227919 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.227983 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.228045 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.228099 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.228179 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.229440 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.229555 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.229692 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.229866 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.229961 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.230054 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.230153 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.230225 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.230316 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.232923 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.233024 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.233106 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.233512 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.233611 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.233694 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.233764 94 log.go:181] (0x2badab0) Data 
frame received for 5\nI1014 13:36:19.233829 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.233895 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.236503 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.236664 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.236763 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.236991 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.237063 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.237120 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.237173 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.237248 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.237312 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.240415 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.240483 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.240556 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.240961 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.241060 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.241149 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.241255 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.241340 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.241442 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.247058 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.247131 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.247206 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.247620 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 
13:36:19.247727 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.247837 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.248003 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.248119 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.248209 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.251496 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.251590 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.251691 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.252316 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.252464 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.252626 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.252786 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.252981 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.253164 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.256780 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.257007 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.257170 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.258108 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.258207 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.258325 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.258475 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.258571 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.258656 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.263538 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.263637 94 log.go:181] 
(0x267e690) (3) Data frame handling\nI1014 13:36:19.263797 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.264190 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.264319 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.264486 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.264630 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1014 13:36:19.264812 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.265069 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.265225 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.265315 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.265456 94 log.go:181] (0x2badce0) (5) Data frame sent\n http://10.102.183.114:80/\nI1014 13:36:19.268896 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.269011 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.269098 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.269479 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.269574 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\nI1014 13:36:19.269657 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.269816 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.269951 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.270054 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.270153 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.270293 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.270413 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.275134 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.275216 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.275285 94 log.go:181] (0x267e690) (3) Data frame 
sent\nI1014 13:36:19.276019 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.276154 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.276266 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.276662 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.276811 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.277008 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.281467 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.281613 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.281794 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.281948 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.282012 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.282102 94 log.go:181] (0x2badce0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.183.114:80/\nI1014 13:36:19.282253 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.282327 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.282407 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.288244 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.288344 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.288465 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.289267 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.289441 94 log.go:181] (0x2badce0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1014 13:36:19.289608 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.289736 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.289839 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.290012 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.290199 94 log.go:181] (0x2badce0) 
(5) Data frame handling\n http://10.102.183.114:80/\nI1014 13:36:19.290372 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.290558 94 log.go:181] (0x2badce0) (5) Data frame sent\nI1014 13:36:19.295116 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.295235 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.295358 94 log.go:181] (0x267e690) (3) Data frame sent\nI1014 13:36:19.295789 94 log.go:181] (0x2badab0) Data frame received for 3\nI1014 13:36:19.295911 94 log.go:181] (0x2badab0) Data frame received for 5\nI1014 13:36:19.296088 94 log.go:181] (0x2badce0) (5) Data frame handling\nI1014 13:36:19.296200 94 log.go:181] (0x267e690) (3) Data frame handling\nI1014 13:36:19.297613 94 log.go:181] (0x2badab0) Data frame received for 1\nI1014 13:36:19.297733 94 log.go:181] (0x2badb20) (1) Data frame handling\nI1014 13:36:19.297856 94 log.go:181] (0x2badb20) (1) Data frame sent\nI1014 13:36:19.298364 94 log.go:181] (0x2badab0) (0x2badb20) Stream removed, broadcasting: 1\nI1014 13:36:19.301107 94 log.go:181] (0x2badab0) Go away received\nI1014 13:36:19.303672 94 log.go:181] (0x2badab0) (0x2badb20) Stream removed, broadcasting: 1\nI1014 13:36:19.304130 94 log.go:181] (0x2badab0) (0x267e690) Stream removed, broadcasting: 3\nI1014 13:36:19.304315 94 log.go:181] (0x2badab0) (0x2badce0) Stream removed, broadcasting: 5\n" Oct 14 13:36:19.319: INFO: stdout: 
"\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6\naffinity-clusterip-transition-mfhn6" Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.319: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: 
INFO: Received response from host: affinity-clusterip-transition-mfhn6 Oct 14 13:36:19.320: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5332, will wait for the garbage collector to delete the pods Oct 14 13:36:19.484: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.939204ms Oct 14 13:36:20.084: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.721386ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:36:35.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5332" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:40.076 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":5,"skipped":140,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:36:35.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-245/configmap-test-9bbc522e-ec11-4f8d-9689-97e939a22e4f STEP: Creating a pod to test consume configMaps Oct 14 13:36:35.919: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756" in namespace "configmap-245" to be "Succeeded or Failed" Oct 14 13:36:35.943: INFO: Pod "pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756": Phase="Pending", Reason="", readiness=false. Elapsed: 24.471485ms Oct 14 13:36:37.951: INFO: Pod "pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031791585s Oct 14 13:36:39.961: INFO: Pod "pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041819151s STEP: Saw pod success Oct 14 13:36:39.961: INFO: Pod "pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756" satisfied condition "Succeeded or Failed" Oct 14 13:36:39.965: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756 container env-test: STEP: delete the pod Oct 14 13:36:40.169: INFO: Waiting for pod pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756 to disappear Oct 14 13:36:40.195: INFO: Pod pod-configmaps-1b2bb75c-2784-457d-803e-d6b8198ed756 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:36:40.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-245" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":141,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:36:40.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 14 13:36:44.905: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:36:45.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2799" for this suite. • [SLOW TEST:5.043 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":144,"failed":0} 
SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:36:45.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 13:36:45.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d" in namespace "projected-1768" to be "Succeeded or Failed" Oct 14 13:36:45.795: INFO: Pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.400927ms Oct 14 13:36:47.803: INFO: Pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050787544s Oct 14 13:36:49.811: INFO: Pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d": Phase="Running", Reason="", readiness=true. Elapsed: 4.058984408s Oct 14 13:36:51.818: INFO: Pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.066336576s STEP: Saw pod success Oct 14 13:36:51.818: INFO: Pod "downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d" satisfied condition "Succeeded or Failed" Oct 14 13:36:51.824: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d container client-container: STEP: delete the pod Oct 14 13:36:51.886: INFO: Waiting for pod downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d to disappear Oct 14 13:36:51.896: INFO: Pod downwardapi-volume-456e6667-e4e1-4f91-90af-7fa8fa1aff0d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:36:51.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1768" for this suite. • [SLOW TEST:6.655 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":161,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:36:51.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5091 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5091 STEP: creating replication controller externalsvc in namespace services-5091 I1014 13:36:52.245145 11 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5091, replica count: 2 I1014 13:36:55.297132 11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 13:36:58.298134 11 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 14 13:36:58.413: INFO: Creating new exec pod Oct 14 13:37:02.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-5091 execpodcmtf7 -- /bin/sh -x -c nslookup nodeport-service.services-5091.svc.cluster.local' Oct 14 13:37:04.068: INFO: 
stderr: "I1014 13:37:03.966658 115 log.go:181] (0x2b3fce0) (0x2b3fd50) Create stream\nI1014 13:37:03.969233 115 log.go:181] (0x2b3fce0) (0x2b3fd50) Stream added, broadcasting: 1\nI1014 13:37:03.979088 115 log.go:181] (0x2b3fce0) Reply frame received for 1\nI1014 13:37:03.979582 115 log.go:181] (0x2b3fce0) (0x2b3fea0) Create stream\nI1014 13:37:03.979652 115 log.go:181] (0x2b3fce0) (0x2b3fea0) Stream added, broadcasting: 3\nI1014 13:37:03.981178 115 log.go:181] (0x2b3fce0) Reply frame received for 3\nI1014 13:37:03.981440 115 log.go:181] (0x2b3fce0) (0x2bae070) Create stream\nI1014 13:37:03.981516 115 log.go:181] (0x2b3fce0) (0x2bae070) Stream added, broadcasting: 5\nI1014 13:37:03.982727 115 log.go:181] (0x2b3fce0) Reply frame received for 5\nI1014 13:37:04.039723 115 log.go:181] (0x2b3fce0) Data frame received for 5\nI1014 13:37:04.040107 115 log.go:181] (0x2bae070) (5) Data frame handling\n+ nslookup nodeport-service.services-5091.svc.cluster.local\nI1014 13:37:04.041018 115 log.go:181] (0x2bae070) (5) Data frame sent\nI1014 13:37:04.047748 115 log.go:181] (0x2b3fce0) Data frame received for 3\nI1014 13:37:04.047852 115 log.go:181] (0x2b3fea0) (3) Data frame handling\nI1014 13:37:04.047974 115 log.go:181] (0x2b3fea0) (3) Data frame sent\nI1014 13:37:04.049182 115 log.go:181] (0x2b3fce0) Data frame received for 3\nI1014 13:37:04.049265 115 log.go:181] (0x2b3fea0) (3) Data frame handling\nI1014 13:37:04.049352 115 log.go:181] (0x2b3fea0) (3) Data frame sent\nI1014 13:37:04.049591 115 log.go:181] (0x2b3fce0) Data frame received for 5\nI1014 13:37:04.049692 115 log.go:181] (0x2bae070) (5) Data frame handling\nI1014 13:37:04.049859 115 log.go:181] (0x2b3fce0) Data frame received for 3\nI1014 13:37:04.050016 115 log.go:181] (0x2b3fea0) (3) Data frame handling\nI1014 13:37:04.051685 115 log.go:181] (0x2b3fce0) Data frame received for 1\nI1014 13:37:04.051788 115 log.go:181] (0x2b3fd50) (1) Data frame handling\nI1014 13:37:04.051913 115 log.go:181] (0x2b3fd50) (1) Data 
frame sent\nI1014 13:37:04.052606 115 log.go:181] (0x2b3fce0) (0x2b3fd50) Stream removed, broadcasting: 1\nI1014 13:37:04.055453 115 log.go:181] (0x2b3fce0) Go away received\nI1014 13:37:04.057639 115 log.go:181] (0x2b3fce0) (0x2b3fd50) Stream removed, broadcasting: 1\nI1014 13:37:04.057943 115 log.go:181] (0x2b3fce0) (0x2b3fea0) Stream removed, broadcasting: 3\nI1014 13:37:04.058285 115 log.go:181] (0x2b3fce0) (0x2bae070) Stream removed, broadcasting: 5\n" Oct 14 13:37:04.069: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5091.svc.cluster.local\tcanonical name = externalsvc.services-5091.svc.cluster.local.\nName:\texternalsvc.services-5091.svc.cluster.local\nAddress: 10.107.195.209\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5091, will wait for the garbage collector to delete the pods Oct 14 13:37:04.140: INFO: Deleting ReplicationController externalsvc took: 9.134089ms Oct 14 13:37:04.541: INFO: Terminating ReplicationController externalsvc pods took: 400.853488ms Oct 14 13:37:15.795: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:37:15.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5091" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.936 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":9,"skipped":167,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:37:15.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating pod test-webserver-fc8ced73-9add-416e-8958-4705693d8eb2 in namespace container-probe-1330 Oct 14 13:37:21.972: INFO: Started pod test-webserver-fc8ced73-9add-416e-8958-4705693d8eb2 in namespace container-probe-1330 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 13:37:21.977: INFO: Initial restart count of pod test-webserver-fc8ced73-9add-416e-8958-4705693d8eb2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:41:23.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1330" for this suite. • [SLOW TEST:247.885 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":182,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Oct 14 13:41:23.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 13:41:24.047: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2" in namespace "security-context-test-704" to be "Succeeded or Failed" Oct 14 13:41:24.194: INFO: Pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2": Phase="Pending", Reason="", readiness=false. Elapsed: 146.997508ms Oct 14 13:41:26.265: INFO: Pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217665832s Oct 14 13:41:28.272: INFO: Pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2": Phase="Running", Reason="", readiness=true. Elapsed: 4.224992629s Oct 14 13:41:30.280: INFO: Pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232491486s Oct 14 13:41:30.280: INFO: Pod "alpine-nnp-false-c89ebaa8-b428-436f-aaff-8d093d0893c2" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:41:30.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-704" for this suite. 
• [SLOW TEST:6.582 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":187,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:41:30.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 13:41:35.011: INFO: Successfully updated pod "labelsupdate9700f581-bb7d-4066-82f6-606e387c110e" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:41:39.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5279" for this suite. • [SLOW TEST:8.773 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":195,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:41:39.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 14 13:41:49.202: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 14 13:41:51.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738279709, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738279709, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738279709, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738279709, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 13:41:54.563: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 13:41:54.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:41:55.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8492" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:16.916 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":13,"skipped":197,"failed":0} SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:41:56.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3597 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 14 13:41:56.131: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 14 13:41:56.189: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 13:41:58.200: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 13:42:00.254: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 13:42:02.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:04.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:06.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:08.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:10.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:12.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 13:42:14.198: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 14 
13:42:14.210: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 14 13:42:18.270: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.126:8080/dial?request=hostname&protocol=http&host=10.244.2.125&port=8080&tries=1'] Namespace:pod-network-test-3597 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 13:42:18.271: INFO: >>> kubeConfig: /root/.kube/config I1014 13:42:18.392257 11 log.go:181] (0x8e6bea0) (0x8e6bf10) Create stream I1014 13:42:18.393009 11 log.go:181] (0x8e6bea0) (0x8e6bf10) Stream added, broadcasting: 1 I1014 13:42:18.410792 11 log.go:181] (0x8e6bea0) Reply frame received for 1 I1014 13:42:18.411288 11 log.go:181] (0x8e6bea0) (0x8dbe0e0) Create stream I1014 13:42:18.411357 11 log.go:181] (0x8e6bea0) (0x8dbe0e0) Stream added, broadcasting: 3 I1014 13:42:18.412981 11 log.go:181] (0x8e6bea0) Reply frame received for 3 I1014 13:42:18.413220 11 log.go:181] (0x8e6bea0) (0x8b380e0) Create stream I1014 13:42:18.413304 11 log.go:181] (0x8e6bea0) (0x8b380e0) Stream added, broadcasting: 5 I1014 13:42:18.414632 11 log.go:181] (0x8e6bea0) Reply frame received for 5 I1014 13:42:18.515064 11 log.go:181] (0x8e6bea0) Data frame received for 3 I1014 13:42:18.515409 11 log.go:181] (0x8e6bea0) Data frame received for 5 I1014 13:42:18.515522 11 log.go:181] (0x8b380e0) (5) Data frame handling I1014 13:42:18.515687 11 log.go:181] (0x8dbe0e0) (3) Data frame handling I1014 13:42:18.516370 11 log.go:181] (0x8dbe0e0) (3) Data frame sent I1014 13:42:18.516966 11 log.go:181] (0x8e6bea0) Data frame received for 1 I1014 13:42:18.517141 11 log.go:181] (0x8e6bf10) (1) Data frame handling I1014 13:42:18.517286 11 log.go:181] (0x8e6bea0) Data frame received for 3 I1014 13:42:18.517470 11 log.go:181] (0x8dbe0e0) (3) Data frame handling I1014 13:42:18.517578 11 log.go:181] (0x8e6bf10) (1) Data frame sent I1014 13:42:18.518431 11 log.go:181] (0x8e6bea0) 
(0x8e6bf10) Stream removed, broadcasting: 1 I1014 13:42:18.520126 11 log.go:181] (0x8e6bea0) Go away received I1014 13:42:18.523028 11 log.go:181] (0x8e6bea0) (0x8e6bf10) Stream removed, broadcasting: 1 I1014 13:42:18.523288 11 log.go:181] (0x8e6bea0) (0x8dbe0e0) Stream removed, broadcasting: 3 I1014 13:42:18.523532 11 log.go:181] (0x8e6bea0) (0x8b380e0) Stream removed, broadcasting: 5 Oct 14 13:42:18.524: INFO: Waiting for responses: map[] Oct 14 13:42:18.543: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.126:8080/dial?request=hostname&protocol=http&host=10.244.1.119&port=8080&tries=1'] Namespace:pod-network-test-3597 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 13:42:18.544: INFO: >>> kubeConfig: /root/.kube/config I1014 13:42:18.658923 11 log.go:181] (0x8d2ca80) (0x8d2cbd0) Create stream I1014 13:42:18.659127 11 log.go:181] (0x8d2ca80) (0x8d2cbd0) Stream added, broadcasting: 1 I1014 13:42:18.663807 11 log.go:181] (0x8d2ca80) Reply frame received for 1 I1014 13:42:18.664035 11 log.go:181] (0x8d2ca80) (0x6be41c0) Create stream I1014 13:42:18.664155 11 log.go:181] (0x8d2ca80) (0x6be41c0) Stream added, broadcasting: 3 I1014 13:42:18.665821 11 log.go:181] (0x8d2ca80) Reply frame received for 3 I1014 13:42:18.665994 11 log.go:181] (0x8d2ca80) (0x6be4460) Create stream I1014 13:42:18.666068 11 log.go:181] (0x8d2ca80) (0x6be4460) Stream added, broadcasting: 5 I1014 13:42:18.667346 11 log.go:181] (0x8d2ca80) Reply frame received for 5 I1014 13:42:18.731859 11 log.go:181] (0x8d2ca80) Data frame received for 3 I1014 13:42:18.732115 11 log.go:181] (0x6be41c0) (3) Data frame handling I1014 13:42:18.732280 11 log.go:181] (0x8d2ca80) Data frame received for 5 I1014 13:42:18.732409 11 log.go:181] (0x6be4460) (5) Data frame handling I1014 13:42:18.732506 11 log.go:181] (0x6be41c0) (3) Data frame sent I1014 13:42:18.732631 11 log.go:181] (0x8d2ca80) Data frame received 
for 3 I1014 13:42:18.732749 11 log.go:181] (0x6be41c0) (3) Data frame handling I1014 13:42:18.733524 11 log.go:181] (0x8d2ca80) Data frame received for 1 I1014 13:42:18.733606 11 log.go:181] (0x8d2cbd0) (1) Data frame handling I1014 13:42:18.733679 11 log.go:181] (0x8d2cbd0) (1) Data frame sent I1014 13:42:18.733761 11 log.go:181] (0x8d2ca80) (0x8d2cbd0) Stream removed, broadcasting: 1 I1014 13:42:18.733988 11 log.go:181] (0x8d2ca80) Go away received I1014 13:42:18.734279 11 log.go:181] (0x8d2ca80) (0x8d2cbd0) Stream removed, broadcasting: 1 I1014 13:42:18.734370 11 log.go:181] (0x8d2ca80) (0x6be41c0) Stream removed, broadcasting: 3 I1014 13:42:18.734471 11 log.go:181] (0x8d2ca80) (0x6be4460) Stream removed, broadcasting: 5 Oct 14 13:42:18.734: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:42:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3597" for this suite. 
• [SLOW TEST:22.736 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":203,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:42:18.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-4c80d045-2917-403c-bb60-200388c3370f STEP: Creating a pod to test consume secrets Oct 14 13:42:18.816: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71" in namespace "projected-1253" to be "Succeeded or Failed" Oct 14 13:42:18.830: INFO: Pod "pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71": Phase="Pending", Reason="", readiness=false. Elapsed: 13.894869ms Oct 14 13:42:20.923: INFO: Pod "pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106210275s Oct 14 13:42:23.045: INFO: Pod "pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228905092s STEP: Saw pod success Oct 14 13:42:23.046: INFO: Pod "pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71" satisfied condition "Succeeded or Failed" Oct 14 13:42:23.051: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71 container projected-secret-volume-test: STEP: delete the pod Oct 14 13:42:23.133: INFO: Waiting for pod pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71 to disappear Oct 14 13:42:23.163: INFO: Pod pod-projected-secrets-67d801d5-9d01-4239-bf3d-bbf8d2e8fb71 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:42:23.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1253" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":224,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:42:23.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-9c79f068-65d3-433e-8441-d92ecd0677bf STEP: Creating secret with name s-test-opt-upd-8ff1bd52-91a7-4f2b-9824-91c7e07a7692 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9c79f068-65d3-433e-8441-d92ecd0677bf STEP: Updating secret s-test-opt-upd-8ff1bd52-91a7-4f2b-9824-91c7e07a7692 STEP: Creating secret with name s-test-opt-create-37995eb4-1878-42be-910b-89029f093d15 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:43:35.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7340" for this suite. 
• [SLOW TEST:72.805 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":231,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:43:35.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 13:43:36.193: INFO: Creating deployment "webserver-deployment" Oct 14 13:43:36.202: INFO: Waiting for observed generation 1 Oct 14 13:43:38.859: INFO: Waiting for all required pods to come up Oct 14 13:43:38.887: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring 
each pod is running Oct 14 13:43:51.082: INFO: Waiting for deployment "webserver-deployment" to complete Oct 14 13:43:51.090: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 14 13:43:51.103: INFO: Updating deployment webserver-deployment Oct 14 13:43:51.103: INFO: Waiting for observed generation 2 Oct 14 13:43:53.142: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 14 13:43:53.149: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 14 13:43:53.154: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 14 13:43:53.167: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 14 13:43:53.168: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 14 13:43:53.173: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 14 13:43:53.182: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 14 13:43:53.182: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 14 13:43:53.193: INFO: Updating deployment webserver-deployment Oct 14 13:43:53.194: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 14 13:43:53.895: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 14 13:43:54.530: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 13:43:57.684: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2704 /apis/apps/v1/namespaces/deployment-2704/deployments/webserver-deployment 
07b1961b-69a4-4022-b17b-727c9a6e8477 1126184 3 2020-10-14 13:43:36 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-14 13:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x8ad5f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-14 13:43:53 +0000 UTC,LastTransitionTime:2020-10-14 13:43:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-10-14 13:43:54 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 14 13:43:58.445: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-2704 /apis/apps/v1/namespaces/deployment-2704/replicasets/webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 1126177 3 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 
07b1961b-69a4-4022-b17b-727c9a6e8477 0x66a58e7 0x66a58e8}] [] [{kube-controller-manager Update apps/v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07b1961b-69a4-4022-b17b-727c9a6e8477\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x66a5978 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 13:43:58.445: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 14 13:43:58.446: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-2704 /apis/apps/v1/namespaces/deployment-2704/replicasets/webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 1126172 3 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 07b1961b-69a4-4022-b17b-727c9a6e8477 0x66a59d7 0x66a59d8}] [] [{kube-controller-manager Update apps/v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07b1961b-69a4-4022-b17b-727c9a6e8477\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x66a5a48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 14 13:43:58.735: INFO: Pod "webserver-deployment-795d758f88-66rdh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-66rdh webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-66rdh 02a7dcd8-4a03-46e9-a1bf-d645cf736abb 1126104 0 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x82891e7 0x82891e8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:52 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.737: INFO: Pod "webserver-deployment-795d758f88-9wxlz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9wxlz webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-9wxlz 55c17c8e-211c-405b-a5cf-58298d9b96cd 1126233 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x82893a7 0x82893a8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.740: INFO: Pod "webserver-deployment-795d758f88-c692m" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c692m webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-c692m 5e00639e-7b1e-4256-bd57-05bad2377895 1126253 0 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289557 0x8289558}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.126\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termi
nationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.126,StartTime:2020-10-14 13:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.126,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.742: INFO: Pod "webserver-deployment-795d758f88-cdpr9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cdpr9 webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-cdpr9 db064306-aa65-4781-9545-9edbf030efdb 1126100 0 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289737 0x8289738}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.745: INFO: Pod "webserver-deployment-795d758f88-fmx7v" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fmx7v webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-fmx7v bf07124e-8e58-41e4-bf59-62cfc50a7041 1126238 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x82898e7 0x82898e8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.746: INFO: Pod "webserver-deployment-795d758f88-hrbvf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hrbvf webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-hrbvf 0727c954-d91e-45dc-a254-6d903f03fe7f 1126251 0 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289a97 0x8289a98}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.137\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]En
vVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Ephe
meralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.137,StartTime:2020-10-14 13:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.137,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.748: INFO: Pod "webserver-deployment-795d758f88-jq49b" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jq49b webserver-deployment-795d758f88- deployment-2704 
/api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-jq49b 7a1ce18f-1588-414f-8e94-5a611690b0ef 1126102 0 2020-10-14 13:43:51 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289c77 0x8289c78}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.749: INFO: Pod "webserver-deployment-795d758f88-mbnm7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mbnm7 webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-mbnm7 de6ab330-71bd-4620-9f3a-d9924d0e70d1 1126240 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289e27 0x8289e28}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.751: INFO: Pod "webserver-deployment-795d758f88-p9wgn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p9wgn webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-p9wgn 59401447-a07e-49ad-b5ec-ec1b04acfb20 1126219 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8289ff7 0x8289ff8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.752: INFO: Pod "webserver-deployment-795d758f88-qxf46" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qxf46 webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-qxf46 b8179b4f-b59c-4051-ba91-4d1f8ce629ea 1126198 0 2020-10-14 13:43:53 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8eaa1a7 0x8eaa1a8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.753: INFO: Pod "webserver-deployment-795d758f88-r9khj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r9khj webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-r9khj a5befcde-cdd8-419d-b231-00844bc50de0 1126248 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8eaa367 0x8eaa368}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.755: INFO: Pod "webserver-deployment-795d758f88-rrfhq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rrfhq webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-rrfhq 3d12dbd7-faca-4177-b6cc-1864624e1c28 1126241 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8eaa527 0x8eaa528}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.757: INFO: Pod "webserver-deployment-795d758f88-vhnv5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vhnv5 webserver-deployment-795d758f88- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-795d758f88-vhnv5 77aa4b45-f967-4082-8bd5-9c0dd66ada8a 1126216 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9 0x8eaa6e7 0x8eaa6e8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4ff6f7d-8692-4f3a-8ae1-f654b8e972e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.758: INFO: Pod "webserver-deployment-dd94f59b7-26pgs" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-26pgs webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-26pgs 55339ff7-357c-42aa-a76d-233769650f1f 1126207 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eaa8e7 0x8eaa8e8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.759: INFO: Pod "webserver-deployment-dd94f59b7-7sgv7" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7sgv7 webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-7sgv7 107534c9-b18f-4890-8296-cc23fa26cf9f 1126030 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eaaa77 0x8eaaa78}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.134,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://939daf653a31d8aa215072fd6527d90164bf2eb521efb93e0ae4125a02e6b849,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.761: INFO: Pod "webserver-deployment-dd94f59b7-87krc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-87krc webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-87krc 6d238840-9132-466d-ac5d-a5f996b95914 1126020 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eaac47 0x8eaac48}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.133,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://acd8dc6a8271ab61680de2737d4cfcac37d0de13b4a55b8cd35991e37110d06c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.762: INFO: Pod "webserver-deployment-dd94f59b7-95h7d" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-95h7d webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-95h7d 94fe2bb5-e27e-4cfb-ab09-3b566c77f004 1126015 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eaadf7 
0x8eaadf8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.135\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-a
lpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpre
adConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.135,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6ece902123ccef8726d902d22733c6ee4ca23bbb181c6ecf58c02066b3bcc8c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.135,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.763: INFO: Pod "webserver-deployment-dd94f59b7-96fqp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-96fqp webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-96fqp 9dde452d-2a9e-4898-8630-a8c2dc4e7dd1 1126021 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 
ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eaafa7 0x8eaafa8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil
,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServ
iceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.124,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://94e3a451fca2e8fd2b4a4175749a9d895548ac58b6f9ee209b36a7d259fd6753,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.765: INFO: Pod "webserver-deployment-dd94f59b7-b47pc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-b47pc webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-b47pc b3911f9d-6988-47fc-8802-c6f8af149391 1125964 0 
2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eab157 0x8eab158}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.131\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersisten
tDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,Shar
eProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.131,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b7953553adaca89a31f8d63689966c4a322f51b594ef8e67cf13c38cf3c3c746,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.766: INFO: Pod "webserver-deployment-dd94f59b7-d7629" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d7629 webserver-deployment-dd94f59b7- deployment-2704 
/api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-d7629 2838712e-c355-478b-a8cd-04090ea27582 1126193 0 2020-10-14 13:43:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eab307 0x8eab308}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.767: INFO: Pod "webserver-deployment-dd94f59b7-fk927" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fk927 webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-fk927 c232abb1-4dad-4e33-9593-f92901d51909 1126016 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eab497 0x8eab498}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.125\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.125,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://64b9c3af18816fc1fecbb9cc94319865d6fea3954fa743d7e06a3b71523fe7e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.125,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.768: INFO: Pod "webserver-deployment-dd94f59b7-glhfq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-glhfq webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-glhfq c8919a25-8607-4bc6-9a74-ff2fcb47f3f3 1126179 0 2020-10-14 13:43:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eab647 0x8eab648}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.770: INFO: Pod "webserver-deployment-dd94f59b7-hmfqk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hmfqk webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-hmfqk cef4e8b5-95db-4f4d-b70f-83a73c45cf70 1125989 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eab7d7 0x8eab7d8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Set
HostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.121,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://27a06a68c677dfd9c4c917bd5c17b3f47898a31f423418201e7914bb335bdd08,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.771: INFO: Pod "webserver-deployment-dd94f59b7-hxrzl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hxrzl webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-hxrzl e120df94-7cb9-4601-9cc5-22770c97349d 1126225 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 
0x8eab987 0x8eab988}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],Wo
rkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySp
readConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.772: INFO: Pod "webserver-deployment-dd94f59b7-jjmpt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jjmpt webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-jjmpt c22d06dc-8d97-4ec4-9535-63d5ab7e8eaa 1126025 0 2020-10-14 13:43:36 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eabb27 0x8eabb28}] [] 
[{kube-controller-manager Update v1 2020-10-14 13:43:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[
]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.132,StartTime:2020-10-14 13:43:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:43:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://21dbe4d9070e890b46feab0a94966fd4f8c51ab355417f1f0915567b703b2410,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.774: INFO: Pod "webserver-deployment-dd94f59b7-mgz98" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mgz98 webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-mgz98 cf79b4a1-7180-4eb4-93ca-37890d5c8068 1126231 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet 
webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eabcd7 0x8eabcd8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:htt
pd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPrio
rity,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.775: INFO: Pod "webserver-deployment-dd94f59b7-mpj9t" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mpj9t webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-mpj9t 1866aea2-1a00-43c4-ab04-fcf166f33ef0 1126232 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x8eabe67 0x8eabe68}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/http
d:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},T
opologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.777: INFO: Pod "webserver-deployment-dd94f59b7-qnpzb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qnpzb webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-qnpzb 9c01955b-4f92-4fb6-8ccf-62a93c9a2bc8 1126204 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 
0x8eabff7 0x8eabff8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],Wo
rkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySp
readConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.778: INFO: Pod "webserver-deployment-dd94f59b7-rd6tr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rd6tr webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-rd6tr aed38459-de88-40f8-8869-e20e96fc74bd 1126228 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x67e2447 0x67e2448}] [] 
[{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemeral
Containers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.779: INFO: Pod "webserver-deployment-dd94f59b7-rsg8r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rsg8r webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-rsg8r 19743d10-472c-4423-8d8d-fbb713a750e3 1126180 0 2020-10-14 13:43:53 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x67e39f7 0x67e39f8}] [] [{kube-controller-manager Update v1 
2020-10-14 13:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Re
sourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Se
tHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.780: INFO: Pod "webserver-deployment-dd94f59b7-snz5p" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-snz5p webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-snz5p 98668362-b6a8-45d7-a1cf-b3b6f6fe7922 1126222 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x67e3bd7 0x67e3bd8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.782: INFO: Pod "webserver-deployment-dd94f59b7-v8frc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v8frc webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-v8frc cbf7d38e-7457-4cce-bc85-c21d7055b0ba 1126213 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x67e3e37 0x67e3e38}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 13:43:58.783: INFO: Pod "webserver-deployment-dd94f59b7-x46dc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x46dc webserver-deployment-dd94f59b7- deployment-2704 /api/v1/namespaces/deployment-2704/pods/webserver-deployment-dd94f59b7-x46dc 249d0a7a-9895-49f8-a68d-51d32451fc26 1126230 0 2020-10-14 13:43:54 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 c6c0a0af-7e08-4ffd-8f5c-31158aebb611 0x68503f7 0x68503f8}] [] [{kube-controller-manager Update v1 2020-10-14 13:43:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c6c0a0af-7e08-4ffd-8f5c-31158aebb611\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:43:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6skns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6skns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6skns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:43:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-10-14 13:43:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:43:58.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2704" for this suite. 
• [SLOW TEST:24.015 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":17,"skipped":239,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:44:00.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-75e57e25-5d83-495b-b3c5-b0efdff3eaba STEP: Creating a pod to test consume configMaps Oct 14 13:44:00.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23" in namespace "projected-3599" to be "Succeeded or Failed" Oct 14 13:44:00.563: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": 
Phase="Pending", Reason="", readiness=false. Elapsed: 131.728111ms Oct 14 13:44:02.593: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161468347s Oct 14 13:44:04.654: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222921584s Oct 14 13:44:06.898: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466986646s Oct 14 13:44:09.746: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 9.314443624s Oct 14 13:44:11.904: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472817379s Oct 14 13:44:14.264: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Pending", Reason="", readiness=false. Elapsed: 13.833128817s Oct 14 13:44:16.535: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Running", Reason="", readiness=true. Elapsed: 16.103416803s Oct 14 13:44:18.555: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Running", Reason="", readiness=true. Elapsed: 18.12350585s Oct 14 13:44:20.588: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Running", Reason="", readiness=true. Elapsed: 20.156665331s Oct 14 13:44:22.810: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Running", Reason="", readiness=true. Elapsed: 22.378444682s Oct 14 13:44:24.989: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.557833915s STEP: Saw pod success Oct 14 13:44:24.989: INFO: Pod "pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23" satisfied condition "Succeeded or Failed" Oct 14 13:44:24.994: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23 container projected-configmap-volume-test: STEP: delete the pod Oct 14 13:44:25.064: INFO: Waiting for pod pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23 to disappear Oct 14 13:44:25.180: INFO: Pod pod-projected-configmaps-e1332cca-959d-436b-9845-33caa868dc23 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:44:25.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3599" for this suite. • [SLOW TEST:25.194 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":249,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:44:25.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 13:44:25.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882" in namespace "downward-api-7257" to be "Succeeded or Failed" Oct 14 13:44:25.634: INFO: Pod "downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882": Phase="Pending", Reason="", readiness=false. Elapsed: 133.116189ms Oct 14 13:44:27.643: INFO: Pod "downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142013818s Oct 14 13:44:29.651: INFO: Pod "downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.149319269s STEP: Saw pod success Oct 14 13:44:29.651: INFO: Pod "downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882" satisfied condition "Succeeded or Failed" Oct 14 13:44:29.656: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882 container client-container: STEP: delete the pod Oct 14 13:44:29.701: INFO: Waiting for pod downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882 to disappear Oct 14 13:44:29.715: INFO: Pod downwardapi-volume-2dfa5b8e-fc1d-4cd1-af98-9a326efc1882 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:44:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7257" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:44:29.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 13:44:29.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a" in namespace "projected-8799" to be "Succeeded or Failed" Oct 14 13:44:29.829: INFO: Pod "downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.848943ms Oct 14 13:44:31.836: INFO: Pod "downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019895532s Oct 14 13:44:33.844: INFO: Pod "downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027977332s STEP: Saw pod success Oct 14 13:44:33.844: INFO: Pod "downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a" satisfied condition "Succeeded or Failed" Oct 14 13:44:33.851: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a container client-container: STEP: delete the pod Oct 14 13:44:33.890: INFO: Waiting for pod downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a to disappear Oct 14 13:44:33.894: INFO: Pod downwardapi-volume-59126a50-bab3-48aa-a99b-77cac674f28a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:44:33.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8799" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:44:33.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-feaa55cd-13f0-4fa5-b539-944dd2cf1aed STEP: Creating a pod to test consume configMaps Oct 14 13:44:33.985: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543" in namespace "configmap-5740" to be "Succeeded or Failed" Oct 14 13:44:34.013: INFO: Pod "pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543": Phase="Pending", Reason="", readiness=false. Elapsed: 27.379392ms Oct 14 13:44:36.021: INFO: Pod "pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0354976s Oct 14 13:44:38.033: INFO: Pod "pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047070654s STEP: Saw pod success Oct 14 13:44:38.033: INFO: Pod "pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543" satisfied condition "Succeeded or Failed" Oct 14 13:44:38.038: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543 container configmap-volume-test: STEP: delete the pod Oct 14 13:44:38.056: INFO: Waiting for pod pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543 to disappear Oct 14 13:44:38.060: INFO: Pod pod-configmaps-e0dbe281-7e1d-4495-8d03-37d91c346543 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:44:38.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5740" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:44:38.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7068 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 14 13:44:38.245: INFO: Found 0 stateful pods, waiting for 3 Oct 14 13:44:48.255: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:44:48.255: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:44:48.255: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 14 13:44:58.257: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:44:58.257: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:44:58.258: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:44:58.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:45:00.118: INFO: stderr: "I1014 13:44:59.969918 135 log.go:181] (0x2d9e000) (0x2d9e070) Create stream\nI1014 13:44:59.973310 135 log.go:181] (0x2d9e000) (0x2d9e070) Stream added, broadcasting: 1\nI1014 13:44:59.986247 135 log.go:181] (0x2d9e000) Reply frame received for 1\nI1014 13:44:59.987164 135 log.go:181] 
(0x2d9e000) (0x29f6f50) Create stream\nI1014 13:44:59.987290 135 log.go:181] (0x2d9e000) (0x29f6f50) Stream added, broadcasting: 3\nI1014 13:44:59.989341 135 log.go:181] (0x2d9e000) Reply frame received for 3\nI1014 13:44:59.989687 135 log.go:181] (0x2d9e000) (0x29f7110) Create stream\nI1014 13:44:59.989765 135 log.go:181] (0x2d9e000) (0x29f7110) Stream added, broadcasting: 5\nI1014 13:44:59.991205 135 log.go:181] (0x2d9e000) Reply frame received for 5\nI1014 13:45:00.039337 135 log.go:181] (0x2d9e000) Data frame received for 5\nI1014 13:45:00.039506 135 log.go:181] (0x29f7110) (5) Data frame handling\nI1014 13:45:00.039800 135 log.go:181] (0x29f7110) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:45:00.097228 135 log.go:181] (0x2d9e000) Data frame received for 3\nI1014 13:45:00.097394 135 log.go:181] (0x29f6f50) (3) Data frame handling\nI1014 13:45:00.097504 135 log.go:181] (0x29f6f50) (3) Data frame sent\nI1014 13:45:00.097603 135 log.go:181] (0x2d9e000) Data frame received for 5\nI1014 13:45:00.097691 135 log.go:181] (0x29f7110) (5) Data frame handling\nI1014 13:45:00.097793 135 log.go:181] (0x2d9e000) Data frame received for 3\nI1014 13:45:00.097903 135 log.go:181] (0x29f6f50) (3) Data frame handling\nI1014 13:45:00.099798 135 log.go:181] (0x2d9e000) Data frame received for 1\nI1014 13:45:00.099862 135 log.go:181] (0x2d9e070) (1) Data frame handling\nI1014 13:45:00.099940 135 log.go:181] (0x2d9e070) (1) Data frame sent\nI1014 13:45:00.100565 135 log.go:181] (0x2d9e000) (0x2d9e070) Stream removed, broadcasting: 1\nI1014 13:45:00.102719 135 log.go:181] (0x2d9e000) Go away received\nI1014 13:45:00.107247 135 log.go:181] (0x2d9e000) (0x2d9e070) Stream removed, broadcasting: 1\nI1014 13:45:00.107670 135 log.go:181] (0x2d9e000) (0x29f6f50) Stream removed, broadcasting: 3\nI1014 13:45:00.107960 135 log.go:181] (0x2d9e000) (0x29f7110) Stream removed, broadcasting: 5\n" Oct 14 13:45:00.119: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:45:00.119: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 14 13:45:10.179: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 14 13:45:20.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 13:45:21.715: INFO: stderr: "I1014 13:45:21.594432 155 log.go:181] (0x2d98000) (0x2d98070) Create stream\nI1014 13:45:21.598233 155 log.go:181] (0x2d98000) (0x2d98070) Stream added, broadcasting: 1\nI1014 13:45:21.608384 155 log.go:181] (0x2d98000) Reply frame received for 1\nI1014 13:45:21.608945 155 log.go:181] (0x2d98000) (0x25cc310) Create stream\nI1014 13:45:21.609005 155 log.go:181] (0x2d98000) (0x25cc310) Stream added, broadcasting: 3\nI1014 13:45:21.610490 155 log.go:181] (0x2d98000) Reply frame received for 3\nI1014 13:45:21.610747 155 log.go:181] (0x2d98000) (0x25cc5b0) Create stream\nI1014 13:45:21.610814 155 log.go:181] (0x2d98000) (0x25cc5b0) Stream added, broadcasting: 5\nI1014 13:45:21.612163 155 log.go:181] (0x2d98000) Reply frame received for 5\nI1014 13:45:21.694781 155 log.go:181] (0x2d98000) Data frame received for 5\nI1014 13:45:21.695097 155 log.go:181] (0x2d98000) Data frame received for 3\nI1014 13:45:21.695218 155 log.go:181] (0x25cc310) (3) Data frame handling\nI1014 13:45:21.695321 155 log.go:181] (0x25cc5b0) (5) Data frame handling\nI1014 13:45:21.695726 155 log.go:181] (0x25cc310) (3) Data frame sent\nI1014 13:45:21.695912 155 log.go:181] (0x25cc5b0) (5) Data frame sent\nI1014 13:45:21.696214 155 log.go:181] 
(0x2d98000) Data frame received for 3\nI1014 13:45:21.696334 155 log.go:181] (0x2d98000) Data frame received for 1\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 13:45:21.696570 155 log.go:181] (0x2d98000) Data frame received for 5\nI1014 13:45:21.697253 155 log.go:181] (0x25cc5b0) (5) Data frame handling\nI1014 13:45:21.697428 155 log.go:181] (0x2d98070) (1) Data frame handling\nI1014 13:45:21.697616 155 log.go:181] (0x25cc310) (3) Data frame handling\nI1014 13:45:21.697808 155 log.go:181] (0x2d98070) (1) Data frame sent\nI1014 13:45:21.699935 155 log.go:181] (0x2d98000) (0x2d98070) Stream removed, broadcasting: 1\nI1014 13:45:21.701432 155 log.go:181] (0x2d98000) Go away received\nI1014 13:45:21.704960 155 log.go:181] (0x2d98000) (0x2d98070) Stream removed, broadcasting: 1\nI1014 13:45:21.705153 155 log.go:181] (0x2d98000) (0x25cc310) Stream removed, broadcasting: 3\nI1014 13:45:21.705313 155 log.go:181] (0x2d98000) (0x25cc5b0) Stream removed, broadcasting: 5\n" Oct 14 13:45:21.715: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 13:45:21.716: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 13:45:31.753: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:45:31.753: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:45:31.753: INFO: Waiting for Pod statefulset-7068/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:45:31.753: INFO: Waiting for Pod statefulset-7068/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:45:41.952: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:45:41.952: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:45:41.952: INFO: Waiting for Pod 
statefulset-7068/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:45:51.769: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:45:51.769: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 13:46:01.767: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:46:01.768: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Oct 14 13:46:11.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:46:16.659: INFO: stderr: "I1014 13:46:16.463907 175 log.go:181] (0x25d3ab0) (0x25d3b90) Create stream\nI1014 13:46:16.466987 175 log.go:181] (0x25d3ab0) (0x25d3b90) Stream added, broadcasting: 1\nI1014 13:46:16.487343 175 log.go:181] (0x25d3ab0) Reply frame received for 1\nI1014 13:46:16.487825 175 log.go:181] (0x25d3ab0) (0x26fc0e0) Create stream\nI1014 13:46:16.487887 175 log.go:181] (0x25d3ab0) (0x26fc0e0) Stream added, broadcasting: 3\nI1014 13:46:16.489410 175 log.go:181] (0x25d3ab0) Reply frame received for 3\nI1014 13:46:16.489679 175 log.go:181] (0x25d3ab0) (0x26fc2a0) Create stream\nI1014 13:46:16.489749 175 log.go:181] (0x25d3ab0) (0x26fc2a0) Stream added, broadcasting: 5\nI1014 13:46:16.490968 175 log.go:181] (0x25d3ab0) Reply frame received for 5\nI1014 13:46:16.566171 175 log.go:181] (0x25d3ab0) Data frame received for 5\nI1014 13:46:16.566432 175 log.go:181] (0x26fc2a0) (5) Data frame handling\nI1014 13:46:16.566963 175 log.go:181] (0x26fc2a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:46:16.642503 175 log.go:181] (0x25d3ab0) Data frame received for 3\nI1014 13:46:16.642716 175 log.go:181] (0x26fc0e0) (3) 
Data frame handling\nI1014 13:46:16.642974 175 log.go:181] (0x25d3ab0) Data frame received for 5\nI1014 13:46:16.643221 175 log.go:181] (0x26fc2a0) (5) Data frame handling\nI1014 13:46:16.644248 175 log.go:181] (0x26fc0e0) (3) Data frame sent\nI1014 13:46:16.644388 175 log.go:181] (0x25d3ab0) Data frame received for 3\nI1014 13:46:16.644481 175 log.go:181] (0x26fc0e0) (3) Data frame handling\nI1014 13:46:16.645854 175 log.go:181] (0x25d3ab0) Data frame received for 1\nI1014 13:46:16.646076 175 log.go:181] (0x25d3b90) (1) Data frame handling\nI1014 13:46:16.646213 175 log.go:181] (0x25d3b90) (1) Data frame sent\nI1014 13:46:16.646677 175 log.go:181] (0x25d3ab0) (0x25d3b90) Stream removed, broadcasting: 1\nI1014 13:46:16.647030 175 log.go:181] (0x25d3ab0) Go away received\nI1014 13:46:16.650274 175 log.go:181] (0x25d3ab0) (0x25d3b90) Stream removed, broadcasting: 1\nI1014 13:46:16.650479 175 log.go:181] (0x25d3ab0) (0x26fc0e0) Stream removed, broadcasting: 3\nI1014 13:46:16.650646 175 log.go:181] (0x25d3ab0) (0x26fc2a0) Stream removed, broadcasting: 5\n" Oct 14 13:46:16.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:46:16.661: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 13:46:26.712: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 14 13:46:37.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 13:46:42.982: INFO: stderr: "I1014 13:46:42.878649 196 log.go:181] (0x2975110) (0x2975180) Create stream\nI1014 13:46:42.880645 196 log.go:181] (0x2975110) (0x2975180) Stream added, broadcasting: 1\nI1014 13:46:42.890733 196 log.go:181] (0x2975110) Reply frame received for 1\nI1014 13:46:42.891592 196 log.go:181] 
(0x2975110) (0x28ff180) Create stream\nI1014 13:46:42.891743 196 log.go:181] (0x2975110) (0x28ff180) Stream added, broadcasting: 3\nI1014 13:46:42.893795 196 log.go:181] (0x2975110) Reply frame received for 3\nI1014 13:46:42.894144 196 log.go:181] (0x2975110) (0x247caf0) Create stream\nI1014 13:46:42.894228 196 log.go:181] (0x2975110) (0x247caf0) Stream added, broadcasting: 5\nI1014 13:46:42.895485 196 log.go:181] (0x2975110) Reply frame received for 5\nI1014 13:46:42.961801 196 log.go:181] (0x2975110) Data frame received for 3\nI1014 13:46:42.962064 196 log.go:181] (0x2975110) Data frame received for 1\nI1014 13:46:42.962330 196 log.go:181] (0x28ff180) (3) Data frame handling\nI1014 13:46:42.962583 196 log.go:181] (0x2975110) Data frame received for 5\nI1014 13:46:42.962720 196 log.go:181] (0x247caf0) (5) Data frame handling\nI1014 13:46:42.962921 196 log.go:181] (0x2975180) (1) Data frame handling\nI1014 13:46:42.963509 196 log.go:181] (0x2975180) (1) Data frame sent\nI1014 13:46:42.963852 196 log.go:181] (0x247caf0) (5) Data frame sent\nI1014 13:46:42.964735 196 log.go:181] (0x2975110) Data frame received for 5\nI1014 13:46:42.964945 196 log.go:181] (0x247caf0) (5) Data frame handling\nI1014 13:46:42.965163 196 log.go:181] (0x28ff180) (3) Data frame sent\nI1014 13:46:42.965282 196 log.go:181] (0x2975110) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 13:46:42.965608 196 log.go:181] (0x2975110) (0x2975180) Stream removed, broadcasting: 1\nI1014 13:46:42.966749 196 log.go:181] (0x28ff180) (3) Data frame handling\nI1014 13:46:42.968920 196 log.go:181] (0x2975110) Go away received\nI1014 13:46:42.971742 196 log.go:181] (0x2975110) (0x2975180) Stream removed, broadcasting: 1\nI1014 13:46:42.971953 196 log.go:181] (0x2975110) (0x28ff180) Stream removed, broadcasting: 3\nI1014 13:46:42.972164 196 log.go:181] (0x2975110) (0x247caf0) Stream removed, broadcasting: 5\n" Oct 14 13:46:42.982: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Oct 14 13:46:42.982: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 13:46:53.024: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:46:53.024: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:46:53.024: INFO: Waiting for Pod statefulset-7068/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:46:53.024: INFO: Waiting for Pod statefulset-7068/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:47:03.148: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:47:03.148: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:47:03.148: INFO: Waiting for Pod statefulset-7068/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:47:13.042: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update Oct 14 13:47:13.042: INFO: Waiting for Pod statefulset-7068/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:47:13.042: INFO: Waiting for Pod statefulset-7068/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 13:47:23.037: INFO: Waiting for StatefulSet statefulset-7068/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 13:47:33.040: INFO: Deleting all statefulset in ns statefulset-7068 Oct 14 13:47:33.048: INFO: Scaling statefulset ss2 to 0 Oct 14 13:48:13.109: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 13:48:13.115: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:48:13.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7068" for this suite. • [SLOW TEST:215.076 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":22,"skipped":343,"failed":0} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:48:13.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Oct 14 13:48:13.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config api-versions' Oct 14 13:48:14.466: INFO: stderr: "" Oct 14 13:48:14.466: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:48:14.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7534" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":23,"skipped":343,"failed":0} S ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:48:14.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-7524 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7524 to expose endpoints map[] Oct 14 13:48:14.712: INFO: successfully validated that service multi-endpoint-test in namespace services-7524 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-7524 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7524 to expose endpoints map[pod1:[100]] Oct 14 13:48:17.809: INFO: successfully validated that service multi-endpoint-test in namespace services-7524 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-7524 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
services-7524 to expose endpoints map[pod1:[100] pod2:[101]] Oct 14 13:48:21.900: INFO: Unexpected endpoints: found map[c733dfc2-1074-4ef0-be63-8ebe3bcb3aad:[100]], expected map[pod1:[100] pod2:[101]], will retry Oct 14 13:48:22.907: INFO: successfully validated that service multi-endpoint-test in namespace services-7524 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-7524 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7524 to expose endpoints map[pod2:[101]] Oct 14 13:48:22.971: INFO: successfully validated that service multi-endpoint-test in namespace services-7524 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-7524 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7524 to expose endpoints map[] Oct 14 13:48:24.000: INFO: successfully validated that service multi-endpoint-test in namespace services-7524 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:48:24.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7524" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.578 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":24,"skipped":344,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:48:24.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-826x STEP: Creating a pod to test 
atomic-volume-subpath Oct 14 13:48:24.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-826x" in namespace "subpath-6960" to be "Succeeded or Failed" Oct 14 13:48:24.426: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Pending", Reason="", readiness=false. Elapsed: 48.567271ms Oct 14 13:48:26.433: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055642124s Oct 14 13:48:28.442: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064597486s Oct 14 13:48:30.450: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 6.072208533s Oct 14 13:48:32.459: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 8.081277529s Oct 14 13:48:34.465: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 10.087013949s Oct 14 13:48:36.472: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 12.09439363s Oct 14 13:48:38.480: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 14.102749219s Oct 14 13:48:40.489: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 16.111519136s Oct 14 13:48:42.498: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 18.120363078s Oct 14 13:48:44.505: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 20.127792943s Oct 14 13:48:46.513: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. Elapsed: 22.135356979s Oct 14 13:48:48.522: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.144068366s Oct 14 13:48:50.528: INFO: Pod "pod-subpath-test-downwardapi-826x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.150621314s STEP: Saw pod success Oct 14 13:48:50.529: INFO: Pod "pod-subpath-test-downwardapi-826x" satisfied condition "Succeeded or Failed" Oct 14 13:48:50.533: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-826x container test-container-subpath-downwardapi-826x: STEP: delete the pod Oct 14 13:48:50.581: INFO: Waiting for pod pod-subpath-test-downwardapi-826x to disappear Oct 14 13:48:50.592: INFO: Pod pod-subpath-test-downwardapi-826x no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-826x Oct 14 13:48:50.592: INFO: Deleting pod "pod-subpath-test-downwardapi-826x" in namespace "subpath-6960" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:48:50.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6960" for this suite. 
• [SLOW TEST:26.560 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":25,"skipped":345,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:48:50.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 13:48:50.737: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07" in namespace "downward-api-5873" to be "Succeeded or Failed" Oct 14 13:48:50.759: INFO: Pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07": Phase="Pending", Reason="", readiness=false. Elapsed: 21.277054ms Oct 14 13:48:52.791: INFO: Pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053393622s Oct 14 13:48:54.802: INFO: Pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07": Phase="Running", Reason="", readiness=true. Elapsed: 4.064741486s Oct 14 13:48:56.809: INFO: Pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.071324217s STEP: Saw pod success Oct 14 13:48:56.809: INFO: Pod "downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07" satisfied condition "Succeeded or Failed" Oct 14 13:48:56.813: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07 container client-container: STEP: delete the pod Oct 14 13:48:56.847: INFO: Waiting for pod downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07 to disappear Oct 14 13:48:56.859: INFO: Pod downwardapi-volume-bc94a1ad-a2a7-4794-a486-8c7dfad98c07 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:48:56.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5873" for this suite. 
• [SLOW TEST:6.247 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:48:56.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5638 [It] Burst 
scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-5638 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5638 Oct 14 13:48:57.083: INFO: Found 0 stateful pods, waiting for 1 Oct 14 13:49:07.103: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 14 13:49:07.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:49:08.629: INFO: stderr: "I1014 13:49:08.476167 237 log.go:181] (0x300a000) (0x300a070) Create stream\nI1014 13:49:08.478187 237 log.go:181] (0x300a000) (0x300a070) Stream added, broadcasting: 1\nI1014 13:49:08.488169 237 log.go:181] (0x300a000) Reply frame received for 1\nI1014 13:49:08.489267 237 log.go:181] (0x300a000) (0x2a18540) Create stream\nI1014 13:49:08.489420 237 log.go:181] (0x300a000) (0x2a18540) Stream added, broadcasting: 3\nI1014 13:49:08.491527 237 log.go:181] (0x300a000) Reply frame received for 3\nI1014 13:49:08.491910 237 log.go:181] (0x300a000) (0x300a230) Create stream\nI1014 13:49:08.491999 237 log.go:181] (0x300a000) (0x300a230) Stream added, broadcasting: 5\nI1014 13:49:08.493548 237 log.go:181] (0x300a000) Reply frame received for 5\nI1014 13:49:08.572206 237 log.go:181] (0x300a000) Data frame received for 5\nI1014 13:49:08.572605 237 log.go:181] (0x300a230) (5) Data frame handling\nI1014 13:49:08.573397 237 log.go:181] (0x300a230) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:49:08.611228 237 log.go:181] (0x300a000) Data 
frame received for 3\nI1014 13:49:08.611411 237 log.go:181] (0x2a18540) (3) Data frame handling\nI1014 13:49:08.611540 237 log.go:181] (0x2a18540) (3) Data frame sent\nI1014 13:49:08.611668 237 log.go:181] (0x300a000) Data frame received for 5\nI1014 13:49:08.611757 237 log.go:181] (0x300a230) (5) Data frame handling\nI1014 13:49:08.612284 237 log.go:181] (0x300a000) Data frame received for 3\nI1014 13:49:08.612481 237 log.go:181] (0x2a18540) (3) Data frame handling\nI1014 13:49:08.614283 237 log.go:181] (0x300a000) Data frame received for 1\nI1014 13:49:08.614380 237 log.go:181] (0x300a070) (1) Data frame handling\nI1014 13:49:08.614483 237 log.go:181] (0x300a070) (1) Data frame sent\nI1014 13:49:08.615691 237 log.go:181] (0x300a000) (0x300a070) Stream removed, broadcasting: 1\nI1014 13:49:08.618183 237 log.go:181] (0x300a000) Go away received\nI1014 13:49:08.620417 237 log.go:181] (0x300a000) (0x300a070) Stream removed, broadcasting: 1\nI1014 13:49:08.620667 237 log.go:181] (0x300a000) (0x2a18540) Stream removed, broadcasting: 3\nI1014 13:49:08.620812 237 log.go:181] (0x300a000) (0x300a230) Stream removed, broadcasting: 5\n" Oct 14 13:49:08.630: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:49:08.630: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 13:49:08.637: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 14 13:49:18.647: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 13:49:18.647: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 13:49:18.712: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:18.714: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:09 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:18.714: INFO: Oct 14 13:49:18.714: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 14 13:49:19.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.969839318s Oct 14 13:49:21.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.95699237s Oct 14 13:49:22.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.507529649s Oct 14 13:49:23.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.478092022s Oct 14 13:49:24.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.468762013s Oct 14 13:49:25.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.459313507s Oct 14 13:49:26.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.446415214s Oct 14 13:49:27.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.435548996s Oct 14 13:49:28.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 416.834435ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5638 Oct 14 13:49:29.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 13:49:30.856: INFO: stderr: "I1014 13:49:30.740509 257 log.go:181] (0x2a88150) (0x2a88460) Create stream\nI1014 13:49:30.743622 257 log.go:181] (0x2a88150) (0x2a88460) Stream added, broadcasting: 1\nI1014 13:49:30.753886 257 log.go:181] (0x2a88150) Reply frame received for 1\nI1014 13:49:30.754802 257 log.go:181] (0x2a88150) (0x29c2310) Create stream\nI1014 
13:49:30.754919 257 log.go:181] (0x2a88150) (0x29c2310) Stream added, broadcasting: 3\nI1014 13:49:30.756934 257 log.go:181] (0x2a88150) Reply frame received for 3\nI1014 13:49:30.757203 257 log.go:181] (0x2a88150) (0x29c24d0) Create stream\nI1014 13:49:30.757269 257 log.go:181] (0x2a88150) (0x29c24d0) Stream added, broadcasting: 5\nI1014 13:49:30.758726 257 log.go:181] (0x2a88150) Reply frame received for 5\nI1014 13:49:30.840741 257 log.go:181] (0x2a88150) Data frame received for 3\nI1014 13:49:30.841160 257 log.go:181] (0x29c2310) (3) Data frame handling\nI1014 13:49:30.841456 257 log.go:181] (0x2a88150) Data frame received for 1\nI1014 13:49:30.841611 257 log.go:181] (0x2a88460) (1) Data frame handling\nI1014 13:49:30.841858 257 log.go:181] (0x2a88150) Data frame received for 5\nI1014 13:49:30.842046 257 log.go:181] (0x29c24d0) (5) Data frame handling\nI1014 13:49:30.842219 257 log.go:181] (0x29c24d0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 13:49:30.842438 257 log.go:181] (0x2a88460) (1) Data frame sent\nI1014 13:49:30.842730 257 log.go:181] (0x29c2310) (3) Data frame sent\nI1014 13:49:30.842830 257 log.go:181] (0x2a88150) Data frame received for 3\nI1014 13:49:30.842895 257 log.go:181] (0x29c2310) (3) Data frame handling\nI1014 13:49:30.843308 257 log.go:181] (0x2a88150) Data frame received for 5\nI1014 13:49:30.843459 257 log.go:181] (0x29c24d0) (5) Data frame handling\nI1014 13:49:30.844688 257 log.go:181] (0x2a88150) (0x2a88460) Stream removed, broadcasting: 1\nI1014 13:49:30.847030 257 log.go:181] (0x2a88150) Go away received\nI1014 13:49:30.848779 257 log.go:181] (0x2a88150) (0x2a88460) Stream removed, broadcasting: 1\nI1014 13:49:30.849126 257 log.go:181] (0x2a88150) (0x29c2310) Stream removed, broadcasting: 3\nI1014 13:49:30.849260 257 log.go:181] (0x2a88150) (0x29c24d0) Stream removed, broadcasting: 5\n" Oct 14 13:49:30.857: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 
13:49:30.857: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 13:49:30.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 13:49:32.448: INFO: stderr: "I1014 13:49:32.258765 277 log.go:181] (0x27b30a0) (0x27b31f0) Create stream\nI1014 13:49:32.260543 277 log.go:181] (0x27b30a0) (0x27b31f0) Stream added, broadcasting: 1\nI1014 13:49:32.267706 277 log.go:181] (0x27b30a0) Reply frame received for 1\nI1014 13:49:32.268274 277 log.go:181] (0x27b30a0) (0x25895e0) Create stream\nI1014 13:49:32.268349 277 log.go:181] (0x27b30a0) (0x25895e0) Stream added, broadcasting: 3\nI1014 13:49:32.270106 277 log.go:181] (0x27b30a0) Reply frame received for 3\nI1014 13:49:32.270602 277 log.go:181] (0x27b30a0) (0x257a000) Create stream\nI1014 13:49:32.270728 277 log.go:181] (0x27b30a0) (0x257a000) Stream added, broadcasting: 5\nI1014 13:49:32.272447 277 log.go:181] (0x27b30a0) Reply frame received for 5\nI1014 13:49:32.377188 277 log.go:181] (0x27b30a0) Data frame received for 3\nI1014 13:49:32.377614 277 log.go:181] (0x25895e0) (3) Data frame handling\nI1014 13:49:32.378015 277 log.go:181] (0x27b30a0) Data frame received for 5\nI1014 13:49:32.378177 277 log.go:181] (0x257a000) (5) Data frame handling\nI1014 13:49:32.378384 277 log.go:181] (0x25895e0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1014 13:49:32.384719 277 log.go:181] (0x257a000) (5) Data frame sent\nI1014 13:49:32.385241 277 log.go:181] (0x27b30a0) Data frame received for 5\nI1014 13:49:32.385327 277 log.go:181] (0x257a000) (5) Data frame handling\nI1014 13:49:32.408229 277 log.go:181] (0x27b30a0) Data frame received for 1\nI1014 13:49:32.408317 
277 log.go:181] (0x27b31f0) (1) Data frame handling\nI1014 13:49:32.409218 277 log.go:181] (0x27b31f0) (1) Data frame sent\nI1014 13:49:32.412141 277 log.go:181] (0x27b30a0) (0x27b31f0) Stream removed, broadcasting: 1\nI1014 13:49:32.418267 277 log.go:181] (0x27b30a0) Data frame received for 3\nI1014 13:49:32.418388 277 log.go:181] (0x25895e0) (3) Data frame handling\nI1014 13:49:32.429269 277 log.go:181] (0x27b30a0) Go away received\nI1014 13:49:32.440635 277 log.go:181] (0x27b30a0) (0x27b31f0) Stream removed, broadcasting: 1\nI1014 13:49:32.441011 277 log.go:181] (0x27b30a0) (0x25895e0) Stream removed, broadcasting: 3\nI1014 13:49:32.441183 277 log.go:181] (0x27b30a0) (0x257a000) Stream removed, broadcasting: 5\n" Oct 14 13:49:32.449: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 13:49:32.449: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 13:49:32.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 13:49:33.949: INFO: stderr: "I1014 13:49:33.828942 298 log.go:181] (0x27327e0) (0x27329a0) Create stream\nI1014 13:49:33.832980 298 log.go:181] (0x27327e0) (0x27329a0) Stream added, broadcasting: 1\nI1014 13:49:33.844269 298 log.go:181] (0x27327e0) Reply frame received for 1\nI1014 13:49:33.844963 298 log.go:181] (0x27327e0) (0x2732cb0) Create stream\nI1014 13:49:33.845050 298 log.go:181] (0x27327e0) (0x2732cb0) Stream added, broadcasting: 3\nI1014 13:49:33.846597 298 log.go:181] (0x27327e0) Reply frame received for 3\nI1014 13:49:33.846839 298 log.go:181] (0x27327e0) (0x2732f50) Create stream\nI1014 13:49:33.846896 298 log.go:181] (0x27327e0) (0x2732f50) Stream added, broadcasting: 5\nI1014 13:49:33.847829 298 log.go:181] (0x27327e0) Reply frame 
received for 5\nI1014 13:49:33.928268 298 log.go:181] (0x27327e0) Data frame received for 3\nI1014 13:49:33.928456 298 log.go:181] (0x27327e0) Data frame received for 5\nI1014 13:49:33.928663 298 log.go:181] (0x2732f50) (5) Data frame handling\nI1014 13:49:33.929552 298 log.go:181] (0x2732cb0) (3) Data frame handling\nI1014 13:49:33.930001 298 log.go:181] (0x27327e0) Data frame received for 1\nI1014 13:49:33.930094 298 log.go:181] (0x27329a0) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1014 13:49:33.930758 298 log.go:181] (0x2732f50) (5) Data frame sent\nI1014 13:49:33.930914 298 log.go:181] (0x2732cb0) (3) Data frame sent\nI1014 13:49:33.931019 298 log.go:181] (0x27329a0) (1) Data frame sent\nI1014 13:49:33.931372 298 log.go:181] (0x27327e0) Data frame received for 5\nI1014 13:49:33.931480 298 log.go:181] (0x2732f50) (5) Data frame handling\nI1014 13:49:33.931575 298 log.go:181] (0x27327e0) Data frame received for 3\nI1014 13:49:33.931733 298 log.go:181] (0x27327e0) (0x27329a0) Stream removed, broadcasting: 1\nI1014 13:49:33.932268 298 log.go:181] (0x2732cb0) (3) Data frame handling\nI1014 13:49:33.935650 298 log.go:181] (0x27327e0) Go away received\nI1014 13:49:33.937863 298 log.go:181] (0x27327e0) (0x27329a0) Stream removed, broadcasting: 1\nI1014 13:49:33.938531 298 log.go:181] (0x27327e0) (0x2732cb0) Stream removed, broadcasting: 3\nI1014 13:49:33.938758 298 log.go:181] (0x27327e0) (0x2732f50) Stream removed, broadcasting: 5\n" Oct 14 13:49:33.949: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 13:49:33.950: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 13:49:33.959: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 13:49:33.960: INFO: Waiting for pod ss-1 to enter Running 
- Ready=true, currently Running - Ready=true Oct 14 13:49:33.960: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 14 13:49:33.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:49:35.558: INFO: stderr: "I1014 13:49:35.449185 318 log.go:181] (0x290ad20) (0x290ad90) Create stream\nI1014 13:49:35.451908 318 log.go:181] (0x290ad20) (0x290ad90) Stream added, broadcasting: 1\nI1014 13:49:35.463331 318 log.go:181] (0x290ad20) Reply frame received for 1\nI1014 13:49:35.464249 318 log.go:181] (0x290ad20) (0x2fa0070) Create stream\nI1014 13:49:35.464361 318 log.go:181] (0x290ad20) (0x2fa0070) Stream added, broadcasting: 3\nI1014 13:49:35.466422 318 log.go:181] (0x290ad20) Reply frame received for 3\nI1014 13:49:35.466833 318 log.go:181] (0x290ad20) (0x290afc0) Create stream\nI1014 13:49:35.466966 318 log.go:181] (0x290ad20) (0x290afc0) Stream added, broadcasting: 5\nI1014 13:49:35.468660 318 log.go:181] (0x290ad20) Reply frame received for 5\nI1014 13:49:35.540430 318 log.go:181] (0x290ad20) Data frame received for 5\nI1014 13:49:35.540759 318 log.go:181] (0x290ad20) Data frame received for 3\nI1014 13:49:35.540949 318 log.go:181] (0x2fa0070) (3) Data frame handling\nI1014 13:49:35.541078 318 log.go:181] (0x290afc0) (5) Data frame handling\nI1014 13:49:35.541819 318 log.go:181] (0x290afc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:49:35.542468 318 log.go:181] (0x2fa0070) (3) Data frame sent\nI1014 13:49:35.542658 318 log.go:181] (0x290ad20) Data frame received for 5\nI1014 13:49:35.542736 318 log.go:181] (0x290afc0) (5) Data frame handling\nI1014 13:49:35.542863 318 log.go:181] (0x290ad20) Data frame received for 3\nI1014 13:49:35.542999 318 log.go:181] 
(0x2fa0070) (3) Data frame handling\nI1014 13:49:35.543620 318 log.go:181] (0x290ad20) Data frame received for 1\nI1014 13:49:35.543707 318 log.go:181] (0x290ad90) (1) Data frame handling\nI1014 13:49:35.543810 318 log.go:181] (0x290ad90) (1) Data frame sent\nI1014 13:49:35.545330 318 log.go:181] (0x290ad20) (0x290ad90) Stream removed, broadcasting: 1\nI1014 13:49:35.547454 318 log.go:181] (0x290ad20) Go away received\nI1014 13:49:35.548978 318 log.go:181] (0x290ad20) (0x290ad90) Stream removed, broadcasting: 1\nI1014 13:49:35.549478 318 log.go:181] (0x290ad20) (0x2fa0070) Stream removed, broadcasting: 3\nI1014 13:49:35.549731 318 log.go:181] (0x290ad20) (0x290afc0) Stream removed, broadcasting: 5\n" Oct 14 13:49:35.559: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:49:35.559: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 13:49:35.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:49:37.201: INFO: stderr: "I1014 13:49:37.035382 338 log.go:181] (0x2cca2a0) (0x2cca310) Create stream\nI1014 13:49:37.037989 338 log.go:181] (0x2cca2a0) (0x2cca310) Stream added, broadcasting: 1\nI1014 13:49:37.049908 338 log.go:181] (0x2cca2a0) Reply frame received for 1\nI1014 13:49:37.051117 338 log.go:181] (0x2cca2a0) (0x26d4380) Create stream\nI1014 13:49:37.051279 338 log.go:181] (0x2cca2a0) (0x26d4380) Stream added, broadcasting: 3\nI1014 13:49:37.053627 338 log.go:181] (0x2cca2a0) Reply frame received for 3\nI1014 13:49:37.053878 338 log.go:181] (0x2cca2a0) (0x26d45b0) Create stream\nI1014 13:49:37.053941 338 log.go:181] (0x2cca2a0) (0x26d45b0) Stream added, broadcasting: 5\nI1014 13:49:37.055246 338 log.go:181] (0x2cca2a0) Reply frame received for 5\nI1014 
13:49:37.153807 338 log.go:181] (0x2cca2a0) Data frame received for 5\nI1014 13:49:37.154136 338 log.go:181] (0x26d45b0) (5) Data frame handling\nI1014 13:49:37.154729 338 log.go:181] (0x26d45b0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:49:37.181820 338 log.go:181] (0x2cca2a0) Data frame received for 3\nI1014 13:49:37.181990 338 log.go:181] (0x26d4380) (3) Data frame handling\nI1014 13:49:37.182184 338 log.go:181] (0x26d4380) (3) Data frame sent\nI1014 13:49:37.182311 338 log.go:181] (0x2cca2a0) Data frame received for 3\nI1014 13:49:37.182441 338 log.go:181] (0x2cca2a0) Data frame received for 5\nI1014 13:49:37.182659 338 log.go:181] (0x26d45b0) (5) Data frame handling\nI1014 13:49:37.183047 338 log.go:181] (0x26d4380) (3) Data frame handling\nI1014 13:49:37.183629 338 log.go:181] (0x2cca2a0) Data frame received for 1\nI1014 13:49:37.183722 338 log.go:181] (0x2cca310) (1) Data frame handling\nI1014 13:49:37.183842 338 log.go:181] (0x2cca310) (1) Data frame sent\nI1014 13:49:37.186488 338 log.go:181] (0x2cca2a0) (0x2cca310) Stream removed, broadcasting: 1\nI1014 13:49:37.187735 338 log.go:181] (0x2cca2a0) Go away received\nI1014 13:49:37.191439 338 log.go:181] (0x2cca2a0) (0x2cca310) Stream removed, broadcasting: 1\nI1014 13:49:37.191738 338 log.go:181] (0x2cca2a0) (0x26d4380) Stream removed, broadcasting: 3\nI1014 13:49:37.192028 338 log.go:181] (0x2cca2a0) (0x26d45b0) Stream removed, broadcasting: 5\n" Oct 14 13:49:37.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:49:37.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 13:49:37.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 13:49:38.797: INFO: 
stderr: "I1014 13:49:38.615895 358 log.go:181] (0x298a000) (0x298a1c0) Create stream\nI1014 13:49:38.618928 358 log.go:181] (0x298a000) (0x298a1c0) Stream added, broadcasting: 1\nI1014 13:49:38.631404 358 log.go:181] (0x298a000) Reply frame received for 1\nI1014 13:49:38.632525 358 log.go:181] (0x298a000) (0x2ea4070) Create stream\nI1014 13:49:38.632698 358 log.go:181] (0x298a000) (0x2ea4070) Stream added, broadcasting: 3\nI1014 13:49:38.636047 358 log.go:181] (0x298a000) Reply frame received for 3\nI1014 13:49:38.636322 358 log.go:181] (0x298a000) (0x247ca80) Create stream\nI1014 13:49:38.636392 358 log.go:181] (0x298a000) (0x247ca80) Stream added, broadcasting: 5\nI1014 13:49:38.637663 358 log.go:181] (0x298a000) Reply frame received for 5\nI1014 13:49:38.747410 358 log.go:181] (0x298a000) Data frame received for 5\nI1014 13:49:38.747709 358 log.go:181] (0x247ca80) (5) Data frame handling\nI1014 13:49:38.748217 358 log.go:181] (0x247ca80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 13:49:38.777099 358 log.go:181] (0x298a000) Data frame received for 3\nI1014 13:49:38.777286 358 log.go:181] (0x2ea4070) (3) Data frame handling\nI1014 13:49:38.777498 358 log.go:181] (0x2ea4070) (3) Data frame sent\nI1014 13:49:38.777666 358 log.go:181] (0x298a000) Data frame received for 3\nI1014 13:49:38.777850 358 log.go:181] (0x2ea4070) (3) Data frame handling\nI1014 13:49:38.778130 358 log.go:181] (0x298a000) Data frame received for 5\nI1014 13:49:38.778349 358 log.go:181] (0x247ca80) (5) Data frame handling\nI1014 13:49:38.779745 358 log.go:181] (0x298a000) Data frame received for 1\nI1014 13:49:38.779889 358 log.go:181] (0x298a1c0) (1) Data frame handling\nI1014 13:49:38.780108 358 log.go:181] (0x298a1c0) (1) Data frame sent\nI1014 13:49:38.781137 358 log.go:181] (0x298a000) (0x298a1c0) Stream removed, broadcasting: 1\nI1014 13:49:38.784673 358 log.go:181] (0x298a000) Go away received\nI1014 13:49:38.788134 358 log.go:181] (0x298a000) 
(0x298a1c0) Stream removed, broadcasting: 1\nI1014 13:49:38.788604 358 log.go:181] (0x298a000) (0x2ea4070) Stream removed, broadcasting: 3\nI1014 13:49:38.788806 358 log.go:181] (0x298a000) (0x247ca80) Stream removed, broadcasting: 5\n" Oct 14 13:49:38.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 13:49:38.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 13:49:38.799: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 13:49:38.804: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 14 13:49:48.823: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 13:49:48.823: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 14 13:49:48.823: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 14 13:49:48.892: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:48.892: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:48.893: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:48.894: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:48.894: INFO: Oct 14 13:49:48.894: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 13:49:50.174: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:50.174: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:50.175: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:50.176: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:50.176: INFO: Oct 14 13:49:50.176: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 13:49:51.195: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:51.196: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:51.196: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:51.197: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:18 +0000 UTC }] Oct 14 13:49:51.197: INFO: Oct 14 13:49:51.197: INFO: StatefulSet 
ss has not reached scale 0, at 3 Oct 14 13:49:52.211: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:52.211: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:52.211: INFO: Oct 14 13:49:52.212: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 14 13:49:53.221: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:53.221: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:53.221: INFO: Oct 14 13:49:53.221: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 14 13:49:54.230: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:54.231: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:54.231: INFO: Oct 14 13:49:54.231: INFO: StatefulSet ss has not 
reached scale 0, at 1 Oct 14 13:49:55.241: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 13:49:55.241: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:49:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 13:48:57 +0000 UTC }] Oct 14 13:49:55.242: INFO: Oct 14 13:49:55.242: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 14 13:49:56.249: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.59291289s Oct 14 13:49:57.257: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.585159431s Oct 14 13:49:58.265: INFO: Verifying statefulset ss doesn't scale past 0 for another 577.114449ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-5638 Oct 14 13:49:59.274: INFO: Scaling statefulset ss to 0 Oct 14 13:49:59.293: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 13:49:59.298: INFO: Deleting all statefulset in ns statefulset-5638 Oct 14 13:49:59.302: INFO: Scaling statefulset ss to 0 Oct 14 13:49:59.315: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 13:49:59.320: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:49:59.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "statefulset-5638" for this suite. • [SLOW TEST:62.475 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":27,"skipped":372,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:49:59.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Oct 14 13:49:59.963: INFO: created pod pod-service-account-defaultsa Oct 14 
13:49:59.963: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 14 13:49:59.971: INFO: created pod pod-service-account-mountsa Oct 14 13:49:59.971: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 14 13:50:00.026: INFO: created pod pod-service-account-nomountsa Oct 14 13:50:00.026: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 14 13:50:00.054: INFO: created pod pod-service-account-defaultsa-mountspec Oct 14 13:50:00.055: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 14 13:50:00.171: INFO: created pod pod-service-account-mountsa-mountspec Oct 14 13:50:00.171: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 14 13:50:00.249: INFO: created pod pod-service-account-nomountsa-mountspec Oct 14 13:50:00.249: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 14 13:50:00.328: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 14 13:50:00.328: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 14 13:50:00.343: INFO: created pod pod-service-account-mountsa-nomountspec Oct 14 13:50:00.343: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 14 13:50:00.511: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 14 13:50:00.511: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:50:00.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6674" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":28,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:50:00.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:50:14.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7877" for this suite. 
• [SLOW TEST:14.079 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":29,"skipped":417,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:50:14.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-27a7197f-0fba-4372-9db5-70701ff06881 STEP: Creating a pod to test consume configMaps Oct 14 13:50:15.412: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d" in namespace "projected-3821" to be "Succeeded or 
Failed" Oct 14 13:50:15.455: INFO: Pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.34744ms Oct 14 13:50:17.658: INFO: Pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246275766s Oct 14 13:50:19.667: INFO: Pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.254529126s Oct 14 13:50:21.677: INFO: Pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264425257s STEP: Saw pod success Oct 14 13:50:21.677: INFO: Pod "pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d" satisfied condition "Succeeded or Failed" Oct 14 13:50:21.683: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d container projected-configmap-volume-test: STEP: delete the pod Oct 14 13:50:21.717: INFO: Waiting for pod pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d to disappear Oct 14 13:50:21.730: INFO: Pod pod-projected-configmaps-362c3ec5-1233-46f0-9125-e0ae0b36dd0d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:50:21.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3821" for this suite. 
• [SLOW TEST:7.006 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:21.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Oct 14 13:50:21.887: INFO: Waiting up to 5m0s for pod "downward-api-eb607169-32de-44f3-884e-db733452c998" in namespace "downward-api-1071" to be "Succeeded or Failed"
Oct 14 13:50:21.895: INFO: Pod "downward-api-eb607169-32de-44f3-884e-db733452c998": Phase="Pending", Reason="", readiness=false. Elapsed: 7.55467ms
Oct 14 13:50:24.207: INFO: Pod "downward-api-eb607169-32de-44f3-884e-db733452c998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319750226s
Oct 14 13:50:26.215: INFO: Pod "downward-api-eb607169-32de-44f3-884e-db733452c998": Phase="Running", Reason="", readiness=true. Elapsed: 4.327711905s
Oct 14 13:50:28.224: INFO: Pod "downward-api-eb607169-32de-44f3-884e-db733452c998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.337012747s
STEP: Saw pod success
Oct 14 13:50:28.224: INFO: Pod "downward-api-eb607169-32de-44f3-884e-db733452c998" satisfied condition "Succeeded or Failed"
Oct 14 13:50:28.230: INFO: Trying to get logs from node latest-worker pod downward-api-eb607169-32de-44f3-884e-db733452c998 container dapi-container: 
STEP: delete the pod
Oct 14 13:50:28.281: INFO: Waiting for pod downward-api-eb607169-32de-44f3-884e-db733452c998 to disappear
Oct 14 13:50:28.338: INFO: Pod downward-api-eb607169-32de-44f3-884e-db733452c998 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:28.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1071" for this suite.
• [SLOW TEST:6.671 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":31,"skipped":502,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:28.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:28.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2052" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":508,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] PodTemplates should delete a collection of pod templates [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:28.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Oct 14 13:50:28.702: INFO: created test-podtemplate-1
Oct 14 13:50:28.708: INFO: created test-podtemplate-2
Oct 14 13:50:28.721: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Oct 14 13:50:28.736: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Oct 14 13:50:28.777: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:28.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3709" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":33,"skipped":518,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:28.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-854125b4-426d-4232-a8fb-71b4e4ef88cb
STEP: Creating a pod to test consume secrets
Oct 14 13:50:28.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47" in namespace "projected-8981" to be "Succeeded or Failed"
Oct 14 13:50:28.988: INFO: Pod "pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47": Phase="Pending", Reason="", readiness=false. Elapsed: 42.433214ms
Oct 14 13:50:31.124: INFO: Pod "pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177997517s
Oct 14 13:50:33.131: INFO: Pod "pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185330465s
STEP: Saw pod success
Oct 14 13:50:33.131: INFO: Pod "pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47" satisfied condition "Succeeded or Failed"
Oct 14 13:50:33.135: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47 container projected-secret-volume-test: 
STEP: delete the pod
Oct 14 13:50:33.169: INFO: Waiting for pod pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47 to disappear
Oct 14 13:50:33.182: INFO: Pod pod-projected-secrets-5e3e1833-4f73-47c4-a435-521fe1e6df47 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:33.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8981" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":525,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:33.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-cb17cc70-b088-46b8-b303-d0a2da14fa65
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-cb17cc70-b088-46b8-b303-d0a2da14fa65
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:39.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4279" for this suite.
• [SLOW TEST:6.440 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":534,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:39.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 13:50:39.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35" in namespace "projected-5044" to be "Succeeded or Failed"
Oct 14 13:50:39.759: INFO: Pod "downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35": Phase="Pending", Reason="", readiness=false. Elapsed: 29.159402ms
Oct 14 13:50:41.766: INFO: Pod "downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036599348s
Oct 14 13:50:43.774: INFO: Pod "downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044600943s
STEP: Saw pod success
Oct 14 13:50:43.774: INFO: Pod "downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35" satisfied condition "Succeeded or Failed"
Oct 14 13:50:43.778: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35 container client-container: 
STEP: delete the pod
Oct 14 13:50:43.808: INFO: Waiting for pod downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35 to disappear
Oct 14 13:50:43.812: INFO: Pod downwardapi-volume-45850637-bcab-440e-ad55-e7fb6865ed35 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:43.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5044" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":581,"failed":0}
------------------------------
[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:43.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:44.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5520" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":37,"skipped":581,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:44.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:50:44.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8157" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":38,"skipped":595,"failed":0}
SSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:50:44.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:02.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2147" for this suite.
• [SLOW TEST:18.178 seconds]
[sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":39,"skipped":599,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:02.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 14 13:51:02.517: INFO: Waiting up to 5m0s for pod "pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b" in namespace "emptydir-8962" to be "Succeeded or Failed"
Oct 14 13:51:02.526: INFO: Pod "pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.310329ms
Oct 14 13:51:04.536: INFO: Pod "pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0187574s
Oct 14 13:51:06.545: INFO: Pod "pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027671415s
STEP: Saw pod success
Oct 14 13:51:06.545: INFO: Pod "pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b" satisfied condition "Succeeded or Failed"
Oct 14 13:51:06.620: INFO: Trying to get logs from node latest-worker pod pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b container test-container: 
STEP: delete the pod
Oct 14 13:51:06.700: INFO: Waiting for pod pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b to disappear
Oct 14 13:51:06.710: INFO: Pod pod-4dc606fd-373c-4c79-8cbb-cb99eb0aec7b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:06.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8962" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":599,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:06.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Oct 14 13:51:06.830: INFO: Waiting up to 5m0s for pod "pod-812b420a-966e-43af-a594-6faa197ff40d" in namespace "emptydir-1635" to be "Succeeded or Failed"
Oct 14 13:51:06.842: INFO: Pod "pod-812b420a-966e-43af-a594-6faa197ff40d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.185485ms
Oct 14 13:51:08.926: INFO: Pod "pod-812b420a-966e-43af-a594-6faa197ff40d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095587427s
Oct 14 13:51:10.935: INFO: Pod "pod-812b420a-966e-43af-a594-6faa197ff40d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104821644s
STEP: Saw pod success
Oct 14 13:51:10.935: INFO: Pod "pod-812b420a-966e-43af-a594-6faa197ff40d" satisfied condition "Succeeded or Failed"
Oct 14 13:51:10.941: INFO: Trying to get logs from node latest-worker pod pod-812b420a-966e-43af-a594-6faa197ff40d container test-container: 
STEP: delete the pod
Oct 14 13:51:10.999: INFO: Waiting for pod pod-812b420a-966e-43af-a594-6faa197ff40d to disappear
Oct 14 13:51:11.022: INFO: Pod pod-812b420a-966e-43af-a594-6faa197ff40d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:11.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1635" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":604,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:11.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-43673598-3b4b-4e1f-b8fa-c24f85c2c827
STEP: Creating configMap with name cm-test-opt-upd-f99bb080-17eb-47e3-9e1f-bcdb9f30b82f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-43673598-3b4b-4e1f-b8fa-c24f85c2c827
STEP: Updating configmap cm-test-opt-upd-f99bb080-17eb-47e3-9e1f-bcdb9f30b82f
STEP: Creating configMap with name cm-test-opt-create-dc52c63a-67e7-4407-bebe-6e3776f250f5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-527" for this suite.
• [SLOW TEST:8.290 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:19.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-1ca2bf84-066b-4d87-b008-0162530aa82a
STEP: Creating a pod to test consume configMaps
Oct 14 13:51:19.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8" in namespace "projected-8441" to be "Succeeded or Failed"
Oct 14 13:51:19.505: INFO: Pod "pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.998962ms
Oct 14 13:51:21.555: INFO: Pod "pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0890179s
Oct 14 13:51:23.562: INFO: Pod "pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096535869s
STEP: Saw pod success
Oct 14 13:51:23.562: INFO: Pod "pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8" satisfied condition "Succeeded or Failed"
Oct 14 13:51:23.569: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8 container projected-configmap-volume-test: 
STEP: delete the pod
Oct 14 13:51:23.749: INFO: Waiting for pod pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8 to disappear
Oct 14 13:51:23.807: INFO: Pod pod-projected-configmaps-3173a467-2942-45d6-b072-1a8c6e0afbe8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:23.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8441" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":651,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:23.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-tcj7
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 13:51:23.956: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tcj7" in namespace "subpath-7062" to be "Succeeded or Failed"
Oct 14 13:51:24.005: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Pending", Reason="", readiness=false. Elapsed: 48.802401ms
Oct 14 13:51:26.024: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067125737s
Oct 14 13:51:28.039: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082516896s
Oct 14 13:51:30.150: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 6.193468677s
Oct 14 13:51:32.159: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 8.202629696s
Oct 14 13:51:34.167: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 10.210721638s
Oct 14 13:51:36.177: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 12.2199988s
Oct 14 13:51:38.185: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 14.228087193s
Oct 14 13:51:40.194: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 16.237105334s
Oct 14 13:51:42.203: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 18.246197954s
Oct 14 13:51:44.211: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 20.253831309s
Oct 14 13:51:46.219: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 22.262074267s
Oct 14 13:51:48.227: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Running", Reason="", readiness=true. Elapsed: 24.27042295s
Oct 14 13:51:50.235: INFO: Pod "pod-subpath-test-configmap-tcj7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.278688921s
STEP: Saw pod success
Oct 14 13:51:50.236: INFO: Pod "pod-subpath-test-configmap-tcj7" satisfied condition "Succeeded or Failed"
Oct 14 13:51:50.241: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-tcj7 container test-container-subpath-configmap-tcj7: 
STEP: delete the pod
Oct 14 13:51:50.264: INFO: Waiting for pod pod-subpath-test-configmap-tcj7 to disappear
Oct 14 13:51:50.304: INFO: Pod pod-subpath-test-configmap-tcj7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tcj7
Oct 14 13:51:50.304: INFO: Deleting pod "pod-subpath-test-configmap-tcj7" in namespace "subpath-7062"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:50.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7062" for this suite.
• [SLOW TEST:26.528 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":44,"skipped":653,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:50.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:54.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7313" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":45,"skipped":655,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:54.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-260561ee-e9b3-4133-9a07-5455e87b42ed
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:51:58.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9965" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":680,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:51:58.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-5723
[It] should have a working scale subresource [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-5723
Oct 14 13:51:58.942: INFO: Found 0 stateful pods, waiting for 1
Oct 14 13:52:08.951: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 14 13:52:08.995: INFO: Deleting all statefulset in ns statefulset-5723
Oct 14 13:52:09.025: INFO: Scaling statefulset ss to 0
Oct 14 13:52:29.128: INFO: Waiting for statefulset status.replicas updated to 0
Oct 14 13:52:29.134: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:52:29.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5723" for this suite.
• [SLOW TEST:30.385 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":47,"skipped":684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:52:29.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-8c24fe79-7bc3-46bb-b87a-cde409be1a28
STEP: Creating a pod to test consume secrets
Oct 14 13:52:29.280: INFO: Waiting up to 5m0s for pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1" in namespace "secrets-9254" to be "Succeeded or Failed"
Oct 14 13:52:29.334: INFO: Pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1": Phase="Pending", Reason="", readiness=false. Elapsed: 53.808693ms
Oct 14 13:52:31.343: INFO: Pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062727358s
Oct 14 13:52:33.352: INFO: Pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1": Phase="Running", Reason="", readiness=true. Elapsed: 4.071410985s
Oct 14 13:52:35.360: INFO: Pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079764921s
STEP: Saw pod success
Oct 14 13:52:35.360: INFO: Pod "pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1" satisfied condition "Succeeded or Failed"
Oct 14 13:52:35.366: INFO: Trying to get logs from node latest-worker pod pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1 container secret-volume-test:
STEP: delete the pod
Oct 14 13:52:35.396: INFO: Waiting for pod pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1 to disappear
Oct 14 13:52:35.409: INFO: Pod pod-secrets-d94b4076-5246-4b78-bcd7-027566ea70a1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:52:35.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9254" for this suite.
• [SLOW TEST:6.211 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":706,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a
kubernetes client Oct 14 13:52:35.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4226 STEP: creating service affinity-clusterip in namespace services-4226 STEP: creating replication controller affinity-clusterip in namespace services-4226 I1014 13:52:35.567229 11 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4226, replica count: 3 I1014 13:52:38.618869 11 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 13:52:41.619888 11 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 13:52:41.756: INFO: Creating new exec pod Oct 14 13:52:46.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4226 execpod-affinityk7h7n -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Oct 14 13:52:48.353: INFO: stderr: "I1014 13:52:48.228600 378 log.go:181] (0x2a7a0e0) (0x2a7a150) Create stream\nI1014 13:52:48.230492 378 log.go:181] (0x2a7a0e0) (0x2a7a150) Stream added, broadcasting: 1\nI1014 13:52:48.238219 378 log.go:181] (0x2a7a0e0) Reply frame received for 1\nI1014 13:52:48.238621 378 log.go:181] (0x2a7a0e0) (0x293a930) Create stream\nI1014 
13:52:48.238693 378 log.go:181] (0x2a7a0e0) (0x293a930) Stream added, broadcasting: 3\nI1014 13:52:48.240418 378 log.go:181] (0x2a7a0e0) Reply frame received for 3\nI1014 13:52:48.241155 378 log.go:181] (0x2a7a0e0) (0x25d8070) Create stream\nI1014 13:52:48.241317 378 log.go:181] (0x2a7a0e0) (0x25d8070) Stream added, broadcasting: 5\nI1014 13:52:48.243190 378 log.go:181] (0x2a7a0e0) Reply frame received for 5\nI1014 13:52:48.330920 378 log.go:181] (0x2a7a0e0) Data frame received for 3\nI1014 13:52:48.331328 378 log.go:181] (0x2a7a0e0) Data frame received for 5\nI1014 13:52:48.331595 378 log.go:181] (0x25d8070) (5) Data frame handling\nI1014 13:52:48.331884 378 log.go:181] (0x293a930) (3) Data frame handling\nI1014 13:52:48.332294 378 log.go:181] (0x2a7a0e0) Data frame received for 1\nI1014 13:52:48.332492 378 log.go:181] (0x2a7a150) (1) Data frame handling\nI1014 13:52:48.333218 378 log.go:181] (0x25d8070) (5) Data frame sent\nI1014 13:52:48.333708 378 log.go:181] (0x2a7a150) (1) Data frame sent\nI1014 13:52:48.334462 378 log.go:181] (0x2a7a0e0) Data frame received for 5\nI1014 13:52:48.334634 378 log.go:181] (0x25d8070) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1014 13:52:48.338714 378 log.go:181] (0x2a7a0e0) (0x2a7a150) Stream removed, broadcasting: 1\nI1014 13:52:48.339247 378 log.go:181] (0x2a7a0e0) Go away received\nI1014 13:52:48.342995 378 log.go:181] (0x2a7a0e0) (0x2a7a150) Stream removed, broadcasting: 1\nI1014 13:52:48.343234 378 log.go:181] (0x2a7a0e0) (0x293a930) Stream removed, broadcasting: 3\nI1014 13:52:48.343419 378 log.go:181] (0x2a7a0e0) (0x25d8070) Stream removed, broadcasting: 5\n" Oct 14 13:52:48.354: INFO: stdout: "" Oct 14 13:52:48.359: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4226 execpod-affinityk7h7n -- /bin/sh -x -c nc -zv -t -w 2 10.110.242.236 80' Oct 14 
13:52:49.831: INFO: stderr: "I1014 13:52:49.725759 398 log.go:181] (0x2602000) (0x2602070) Create stream\nI1014 13:52:49.729439 398 log.go:181] (0x2602000) (0x2602070) Stream added, broadcasting: 1\nI1014 13:52:49.740304 398 log.go:181] (0x2602000) Reply frame received for 1\nI1014 13:52:49.740891 398 log.go:181] (0x2602000) (0x2745880) Create stream\nI1014 13:52:49.740960 398 log.go:181] (0x2602000) (0x2745880) Stream added, broadcasting: 3\nI1014 13:52:49.742130 398 log.go:181] (0x2602000) Reply frame received for 3\nI1014 13:52:49.742376 398 log.go:181] (0x2602000) (0x2dd6000) Create stream\nI1014 13:52:49.742433 398 log.go:181] (0x2602000) (0x2dd6000) Stream added, broadcasting: 5\nI1014 13:52:49.743624 398 log.go:181] (0x2602000) Reply frame received for 5\nI1014 13:52:49.813958 398 log.go:181] (0x2602000) Data frame received for 3\nI1014 13:52:49.814340 398 log.go:181] (0x2745880) (3) Data frame handling\nI1014 13:52:49.814874 398 log.go:181] (0x2602000) Data frame received for 5\nI1014 13:52:49.815105 398 log.go:181] (0x2dd6000) (5) Data frame handling\nI1014 13:52:49.815468 398 log.go:181] (0x2602000) Data frame received for 1\nI1014 13:52:49.815650 398 log.go:181] (0x2602070) (1) Data frame handling\nI1014 13:52:49.815998 398 log.go:181] (0x2dd6000) (5) Data frame sent\nI1014 13:52:49.816201 398 log.go:181] (0x2602070) (1) Data frame sent\n+ nc -zv -t -w 2 10.110.242.236 80\nConnection to 10.110.242.236 80 port [tcp/http] succeeded!\nI1014 13:52:49.816509 398 log.go:181] (0x2602000) Data frame received for 5\nI1014 13:52:49.816583 398 log.go:181] (0x2dd6000) (5) Data frame handling\nI1014 13:52:49.818428 398 log.go:181] (0x2602000) (0x2602070) Stream removed, broadcasting: 1\nI1014 13:52:49.820747 398 log.go:181] (0x2602000) Go away received\nI1014 13:52:49.822817 398 log.go:181] (0x2602000) (0x2602070) Stream removed, broadcasting: 1\nI1014 13:52:49.823315 398 log.go:181] (0x2602000) (0x2745880) Stream removed, broadcasting: 3\nI1014 13:52:49.823625 398 
log.go:181] (0x2602000) (0x2dd6000) Stream removed, broadcasting: 5\n" Oct 14 13:52:49.831: INFO: stdout: "" Oct 14 13:52:49.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4226 execpod-affinityk7h7n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.242.236:80/ ; done' Oct 14 13:52:51.404: INFO: stderr: "I1014 13:52:51.184392 418 log.go:181] (0x281e000) (0x281e070) Create stream\nI1014 13:52:51.186436 418 log.go:181] (0x281e000) (0x281e070) Stream added, broadcasting: 1\nI1014 13:52:51.198344 418 log.go:181] (0x281e000) Reply frame received for 1\nI1014 13:52:51.199568 418 log.go:181] (0x281e000) (0x3086070) Create stream\nI1014 13:52:51.199758 418 log.go:181] (0x281e000) (0x3086070) Stream added, broadcasting: 3\nI1014 13:52:51.201733 418 log.go:181] (0x281e000) Reply frame received for 3\nI1014 13:52:51.201958 418 log.go:181] (0x281e000) (0x29ce070) Create stream\nI1014 13:52:51.202020 418 log.go:181] (0x281e000) (0x29ce070) Stream added, broadcasting: 5\nI1014 13:52:51.203057 418 log.go:181] (0x281e000) Reply frame received for 5\nI1014 13:52:51.288640 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.289079 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.289349 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.289519 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.289703 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.290085 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.290915 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.291083 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.291237 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.291521 418 log.go:181] (0x281e000) Data frame 
received for 3\nI1014 13:52:51.291619 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.291720 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.291847 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.291940 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.292042 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.297264 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.297458 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.297667 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.297831 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.297917 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.298029 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1014 13:52:51.298519 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.298678 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.298787 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.298928 418 log.go:181] (0x29ce070) (5) Data frame handling\n http://10.110.242.236:80/\nI1014 13:52:51.299066 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.299301 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.303779 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.303936 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.304101 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.304419 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.304570 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.304717 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.304826 418 log.go:181] (0x3086070) (3) Data frame 
handling\nI1014 13:52:51.305040 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.305198 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.308327 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.308410 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.308520 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.309159 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.309278 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.309378 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.309523 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.309644 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.309756 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.315219 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.315302 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.315382 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.315975 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.316073 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.316170 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.316345 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.316481 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.316604 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.321018 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.321119 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.321237 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.321919 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.322085 418 log.go:181] (0x281e000) Data frame received for 
3\nI1014 13:52:51.322261 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.322398 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.322556 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.322723 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.327700 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.327833 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.328007 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.328435 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.328539 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.328669 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.328776 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.329332 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.329498 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.334657 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.334831 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.334992 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.335337 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.335469 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.335598 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.335712 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.335793 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.335900 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.343086 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.343299 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 
13:52:51.343494 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.346028 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.346155 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\nI1014 13:52:51.346300 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.346481 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.346712 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.346888 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.347069 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.347206 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.347368 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.355466 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.355599 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.355736 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.355891 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.355992 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.356122 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.359614 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.359690 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.359788 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.360490 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.360618 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.360706 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.360819 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.360969 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 
13:52:51.361069 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.365930 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.366002 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.366124 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.366843 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.366956 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.367022 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.367091 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.367145 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.367210 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.370396 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.370463 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.370537 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.371116 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.371187 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.371279 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.371378 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.371461 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.371541 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.375327 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.375431 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.375557 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.376097 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.376199 418 log.go:181] (0x29ce070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 
13:52:51.376282 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.376358 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.376429 418 log.go:181] (0x29ce070) (5) Data frame sent\nI1014 13:52:51.376519 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.380779 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.381014 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.381180 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.381809 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.381925 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.382228 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.382393 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.382533 418 log.go:181] (0x29ce070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.242.236:80/\nI1014 13:52:51.382681 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.387527 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.387648 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.387809 418 log.go:181] (0x3086070) (3) Data frame sent\nI1014 13:52:51.388528 418 log.go:181] (0x281e000) Data frame received for 3\nI1014 13:52:51.388696 418 log.go:181] (0x3086070) (3) Data frame handling\nI1014 13:52:51.388809 418 log.go:181] (0x281e000) Data frame received for 5\nI1014 13:52:51.389048 418 log.go:181] (0x29ce070) (5) Data frame handling\nI1014 13:52:51.390558 418 log.go:181] (0x281e000) Data frame received for 1\nI1014 13:52:51.390670 418 log.go:181] (0x281e070) (1) Data frame handling\nI1014 13:52:51.390815 418 log.go:181] (0x281e070) (1) Data frame sent\nI1014 13:52:51.391361 418 log.go:181] (0x281e000) (0x281e070) Stream removed, broadcasting: 1\nI1014 13:52:51.394032 418 log.go:181] (0x281e000) Go away received\nI1014 13:52:51.395888 418 log.go:181] (0x281e000) (0x281e070) 
Stream removed, broadcasting: 1\nI1014 13:52:51.396422 418 log.go:181] (0x281e000) (0x3086070) Stream removed, broadcasting: 3\nI1014 13:52:51.396681 418 log.go:181] (0x281e000) (0x29ce070) Stream removed, broadcasting: 5\n"
Oct 14 13:52:51.409: INFO: stdout: "\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9\naffinity-clusterip-6xrb9"
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.409: INFO: Received response from host: affinity-clusterip-6xrb9
Oct 14 13:52:51.410: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-4226, will wait for the garbage collector to delete the pods
Oct 14 13:52:51.505: INFO: Deleting ReplicationController affinity-clusterip took: 8.900125ms
Oct 14 13:52:51.906: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.990201ms
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:53:05.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4226" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:30.334 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":49,"skipped":719,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:53:05.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 14 13:53:05.921: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 14 13:54:06.010: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:54:06.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Oct 14 13:54:10.201: INFO: found a healthy node: latest-worker
[It] runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 13:54:24.475: INFO: pods created so far: [1 1 1]
Oct 14 13:54:24.476: INFO: length of pods created so far: 3
Oct 14 13:54:36.495: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:54:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1455" for this suite.
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:54:43.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-4449" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:97.925 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":50,"skipped":738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:54:43.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-node] PodTemplates
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:54:43.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5336" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":51,"skipped":760,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:54:43.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:54:48.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-837" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":52,"skipped":773,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:54:48.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Oct 14 13:54:48.560: INFO: Waiting up to 5m0s for pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c" in namespace "var-expansion-6142" to be "Succeeded or Failed"
Oct 14 13:54:48.569: INFO: Pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.670643ms
Oct 14 13:54:50.595: INFO: Pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03495395s
Oct 14 13:54:52.603: INFO: Pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042528299s
Oct 14 13:54:54.609: INFO: Pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04925937s
STEP: Saw pod success
Oct 14 13:54:54.609: INFO: Pod "var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c" satisfied condition "Succeeded or Failed"
Oct 14 13:54:54.614: INFO: Trying to get logs from node latest-worker pod var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c container dapi-container:
STEP: delete the pod
Oct 14 13:54:54.686: INFO: Waiting for pod var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c to disappear
Oct 14 13:54:54.691: INFO: Pod var-expansion-d79065f8-3e15-46cb-ad8e-bc90c9c7be7c no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:54:54.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6142" for this suite.
• [SLOW TEST:6.441 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":784,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:54:54.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct 14 13:55:02.955: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 14 13:55:02.960: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 14 13:55:04.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 14 13:55:04.970: INFO: Pod pod-with-prestop-exec-hook still exists
Oct 14 13:55:06.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Oct 14 13:55:06.969: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:55:06.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2759" for this suite.
• [SLOW TEST:12.288 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":792,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:55:06.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support rollover [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 13:55:07.085: INFO: Pod name rollover-pod: Found 0 pods out
of 1 Oct 14 13:55:12.092: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 14 13:55:12.093: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 14 13:55:14.101: INFO: Creating deployment "test-rollover-deployment" Oct 14 13:55:14.117: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 14 13:55:16.185: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 14 13:55:16.197: INFO: Ensure that both replica sets have 1 created replica Oct 14 13:55:16.208: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 14 13:55:16.220: INFO: Updating deployment test-rollover-deployment Oct 14 13:55:16.220: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 14 13:55:18.274: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 14 13:55:18.462: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 14 13:55:18.497: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:18.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280516, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:20.511: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:20.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280520, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:22.514: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:22.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280520, loc:(*time.Location)(0x5d1d160)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:24.521: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:24.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280520, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:26.515: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:26.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280520, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:28.515: INFO: all replica sets need to contain the pod-template-hash label Oct 14 13:55:28.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280520, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280514, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 13:55:30.543: INFO: Oct 14 13:55:30.543: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 13:55:30.698: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6440 /apis/apps/v1/namespaces/deployment-6440/deployments/test-rollover-deployment af4d0040-ca11-419f-ae1a-94f6927af367 1130834 2 2020-10-14 13:55:14 +0000 UTC map[name:rollover-pod] 
map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-14 13:55:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 13:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x7d2c5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-14 13:55:14 +0000 UTC,LastTransitionTime:2020-10-14 13:55:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-10-14 13:55:30 +0000 UTC,LastTransitionTime:2020-10-14 13:55:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 14 13:55:30.707: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-6440 /apis/apps/v1/namespaces/deployment-6440/replicasets/test-rollover-deployment-5797c7764 379c862f-16e3-4ba8-a7b8-eeb3346030e9 1130823 2 2020-10-14 13:55:16 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-rollover-deployment af4d0040-ca11-419f-ae1a-94f6927af367 0x9809ea0 0x9809ea1}] [] [{kube-controller-manager Update apps/v1 2020-10-14 13:55:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af4d0040-ca11-419f-ae1a-94f6927af367\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9809f18 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 14 13:55:30.707: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 14 13:55:30.708: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6440 /apis/apps/v1/namespaces/deployment-6440/replicasets/test-rollover-controller 9257017c-bc82-41de-8f34-2200b6609144 1130833 2 2020-10-14 13:55:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment af4d0040-ca11-419f-ae1a-94f6927af367 0x9809d97 0x9809d98}] [] [{e2e.test Update apps/v1 2020-10-14 13:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 13:55:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af4d0040-ca11-419f-ae1a-94f6927af367\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x9809e38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 13:55:30.709: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-6440 /apis/apps/v1/namespaces/deployment-6440/replicasets/test-rollover-deployment-78bc8b888c f13d4a7d-43a6-4b8f-a9e9-5108b7516833 1130772 2 2020-10-14 13:55:14 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment af4d0040-ca11-419f-ae1a-94f6927af367 0x9809f87 0x9809f88}] [] [{kube-controller-manager Update apps/v1 2020-10-14 13:55:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af4d0040-ca11-419f-ae1a-94f6927af367\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x68940b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 13:55:30.716: INFO: Pod "test-rollover-deployment-5797c7764-kvlkd" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-kvlkd test-rollover-deployment-5797c7764- deployment-6440 /api/v1/namespaces/deployment-6440/pods/test-rollover-deployment-5797c7764-kvlkd 7cb32209-e9ff-48e5-8c9c-92f5e94b3192 1130789 0 2020-10-14 13:55:16 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 379c862f-16e3-4ba8-a7b8-eeb3346030e9 0x97bf700 0x97bf701}] [] [{kube-controller-manager Update v1 2020-10-14 13:55:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"379c862f-16e3-4ba8-a7b8-eeb3346030e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 13:55:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.213\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f89v5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f89v5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f89v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:55:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:55:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:55:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 13:55:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.213,StartTime:2020-10-14 13:55:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 13:55:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://f55cca639eae2c91a2dbc999047e99d1311aa071461e64674d39437e1a46d23d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:55:30.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6440" for this suite. • [SLOW TEST:23.737 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":55,"skipped":798,"failed":0} S ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:55:30.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 14 13:55:30.842: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 14 13:55:30.889: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 14 13:55:30.890: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 14 13:55:31.089: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} 
memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 14 13:55:31.089: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 14 13:55:31.223: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 14 13:55:31.223: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 14 13:55:38.448: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 
13:55:38.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-8332" for this suite. • [SLOW TEST:7.775 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":56,"skipped":799,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:55:38.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 14 13:55:48.857: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 14 13:55:48.884: INFO: Pod pod-with-prestop-http-hook still exists Oct 14 13:55:50.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 14 13:55:50.893: INFO: Pod pod-with-prestop-http-hook still exists Oct 14 13:55:52.885: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 14 13:55:52.893: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:55:52.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9015" for this suite. 
• [SLOW TEST:14.406 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:55:52.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should 
get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:56:28.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8412" for this suite. 
• [SLOW TEST:35.251 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:56:28.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Oct 14 13:56:32.296: INFO: Pod pod-hostip-ca42f02d-2f79-4d34-8a60-ae0c492f8b46 has hostIP: 172.18.0.15 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:56:32.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9204" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":857,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:56:32.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] 
[sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:56:32.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9765" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":60,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:56:32.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 13:56:32.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939" in namespace "projected-6271" to be "Succeeded or Failed" Oct 14 
13:56:32.543: INFO: Pod "downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939": Phase="Pending", Reason="", readiness=false. Elapsed: 12.882114ms Oct 14 13:56:34.551: INFO: Pod "downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021037029s Oct 14 13:56:36.560: INFO: Pod "downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029509379s STEP: Saw pod success Oct 14 13:56:36.560: INFO: Pod "downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939" satisfied condition "Succeeded or Failed" Oct 14 13:56:36.565: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939 container client-container: STEP: delete the pod Oct 14 13:56:36.820: INFO: Waiting for pod downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939 to disappear Oct 14 13:56:36.833: INFO: Pod downwardapi-volume-2a2f7c40-3036-450d-b43d-9080fdd47939 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:56:36.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6271" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":896,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 13:56:36.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 13:56:45.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 13:56:47.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280605, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280605, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280605, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280605, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 13:56:50.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Oct 14 13:56:50.562: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 13:56:50.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9477" for this suite. STEP: Destroying namespace "webhook-9477-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:13.890 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":62,"skipped":912,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:56:50.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 13:56:59.316: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 13:57:01.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280619, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280619, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280619, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280619, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 13:57:04.472: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 13:57:04.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7568-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:57:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8279" for this suite.
STEP: Destroying namespace "webhook-8279-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.065 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":63,"skipped":950,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:57:05.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 13:57:05.915: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:57:07.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5213" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":64,"skipped":953,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:57:07.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 13:57:20.302: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 13:57:22.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280640, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280640, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280640, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280640, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 13:57:25.492: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:57:25.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7740" for this suite.
STEP: Destroying namespace "webhook-7740-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.475 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":65,"skipped":965,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:57:25.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-81c2b6f6-fcc0-499d-bccc-2161e9e5d2e2
STEP: Creating a pod to test consume configMaps
Oct 14 13:57:25.737: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b" in namespace "projected-9363" to be "Succeeded or Failed"
Oct 14 13:57:25.751: INFO: Pod "pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.640076ms
Oct 14 13:57:27.757: INFO: Pod "pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020134962s
Oct 14 13:57:29.764: INFO: Pod "pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027261168s
STEP: Saw pod success
Oct 14 13:57:29.765: INFO: Pod "pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b" satisfied condition "Succeeded or Failed"
Oct 14 13:57:29.770: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b container projected-configmap-volume-test:
STEP: delete the pod
Oct 14 13:57:29.861: INFO: Waiting for pod pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b to disappear
Oct 14 13:57:29.896: INFO: Pod pod-projected-configmaps-8253508c-d247-405e-841d-bf78be63bc3b no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:57:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9363" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":985,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:57:29.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Oct 14 13:57:29.979: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 13:57:50.426: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:59:02.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6168" for this suite.
• [SLOW TEST:92.331 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":67,"skipped":985,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:59:02.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 13:59:17.020: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 13:59:19.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280756, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 14 13:59:21.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280757, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738280756, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 13:59:24.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 13:59:24.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2391-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 13:59:25.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9179" for this suite.
STEP: Destroying namespace "webhook-9179-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:23.376 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":68,"skipped":987,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 13:59:25.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1014 13:59:26.579021 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 14 14:00:28.611: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:00:28.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8484" for this suite.
• [SLOW TEST:63.016 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":69,"skipped":990,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:00:28.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 14 14:00:28.731: INFO: Waiting up to 5m0s for pod "pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e" in namespace "emptydir-3169" to be "Succeeded or Failed"
Oct 14 14:00:28.748: INFO: Pod "pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.183344ms
Oct 14 14:00:30.755: INFO: Pod "pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023725764s
Oct 14 14:00:32.763: INFO: Pod "pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032317551s
STEP: Saw pod success
Oct 14 14:00:32.764: INFO: Pod "pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e" satisfied condition "Succeeded or Failed"
Oct 14 14:00:32.770: INFO: Trying to get logs from node latest-worker pod pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e container test-container:
STEP: delete the pod
Oct 14 14:00:32.867: INFO: Waiting for pod pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e to disappear
Oct 14 14:00:33.110: INFO: Pod pod-71ebb80c-769b-42ca-baeb-c1d5d41b3a8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:00:33.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3169" for this suite.
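[Editor's note: the emptyDir test above creates a pod with an emptyDir volume on the node's default medium and checks the mount's mode. A minimal sketch of an equivalent pod manifest follows; the names, image, and command are illustrative assumptions, not the exact spec the e2e framework generates.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default-medium   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # matches the container name seen in the log
    image: busybox                    # assumption: any image with a shell works here
    command: ["sh", "-c", "ls -ld /test-volume"]  # inspect the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # empty medium field = node's default storage, not tmpfs
```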
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1006,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:00:33.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl run pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
[It] should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct 14 14:00:33.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2182'
Oct 14 14:00:37.389: INFO: stderr: ""
Oct 14 14:00:37.389: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Oct 14 14:00:37.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2182'
Oct 14 14:00:45.641: INFO: stderr: ""
Oct 14 14:00:45.641: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:00:45.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2182" for this suite.
• [SLOW TEST:12.527 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl run pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":71,"skipped":1012,"failed":0}
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:00:45.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-4d2b2c39-91b2-4122-b211-b67dec606df7
STEP: Creating secret with name secret-projected-all-test-volume-ab02dad2-0266-49c3-b0a0-79ba8c447bb2
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 14 14:00:45.775: INFO: Waiting up to 5m0s for pod "projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3" in namespace "projected-8361" to be "Succeeded or Failed"
Oct 14 14:00:45.823: INFO: Pod "projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3": Phase="Pending", Reason="", readiness=false. Elapsed: 48.150898ms
Oct 14 14:00:47.830: INFO: Pod "projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05505313s
Oct 14 14:00:49.838: INFO: Pod "projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063022846s
STEP: Saw pod success
Oct 14 14:00:49.838: INFO: Pod "projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3" satisfied condition "Succeeded or Failed"
Oct 14 14:00:49.844: INFO: Trying to get logs from node latest-worker pod projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3 container projected-all-volume-test:
STEP: delete the pod
Oct 14 14:00:49.908: INFO: Waiting for pod projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3 to disappear
Oct 14 14:00:49.915: INFO: Pod projected-volume-b70786f1-059b-4c34-9e5c-e2f52d20ddf3 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:00:49.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8361" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1012,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:00:49.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1014 14:01:31.219474 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 14 14:02:33.247: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Oct 14 14:02:33.247: INFO: Deleting pod "simpletest.rc-cz9jl" in namespace "gc-374"
Oct 14 14:02:33.284: INFO: Deleting pod "simpletest.rc-d9rj2" in namespace "gc-374"
Oct 14 14:02:33.363: INFO: Deleting pod "simpletest.rc-mc8n2" in namespace "gc-374"
Oct 14 14:02:33.407: INFO: Deleting pod "simpletest.rc-nnsgg" in namespace "gc-374"
Oct 14 14:02:33.755: INFO: Deleting pod "simpletest.rc-pdbb8" in namespace "gc-374"
Oct 14 14:02:34.294: INFO: Deleting pod "simpletest.rc-qh8b7" in namespace "gc-374"
Oct 14 14:02:34.504: INFO: Deleting pod "simpletest.rc-qrpnj" in namespace "gc-374"
Oct 14 14:02:34.846: INFO: Deleting pod "simpletest.rc-vdh6q" in namespace "gc-374"
Oct 14 14:02:35.316: INFO: Deleting pod "simpletest.rc-xk5bn" in namespace "gc-374"
Oct 14 14:02:35.474: INFO: Deleting pod "simpletest.rc-zdd9l" in namespace "gc-374"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:02:36.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-374" for this suite.
• [SLOW TEST:106.544 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":73,"skipped":1043,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:02:36.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1014 14:02:39.358443 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 14 14:03:41.419: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:03:41.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3023" for this suite.
• [SLOW TEST:64.927 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":74,"skipped":1054,"failed":0}
S
------------------------------
[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:03:41.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-532 STEP: creating service affinity-nodeport in namespace services-532 STEP: creating replication controller affinity-nodeport in namespace services-532 I1014 14:03:41.645846 11 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-532, replica count: 3 I1014 14:03:44.697706 11 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:03:47.698447 11 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 14:03:47.829: INFO: Creating new exec pod Oct 14 14:03:52.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-532 execpod-affinitylqxxj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Oct 14 14:03:54.456: INFO: stderr: "I1014 14:03:54.344510 479 log.go:181] (0x254e000) (0x254e070) Create stream\nI1014 14:03:54.348994 479 log.go:181] (0x254e000) (0x254e070) Stream added, broadcasting: 1\nI1014 14:03:54.365524 479 log.go:181] (0x254e000) Reply frame received for 1\nI1014 14:03:54.366610 479 log.go:181] (0x254e000) (0x3026070) Create stream\nI1014 14:03:54.366740 479 log.go:181] (0x254e000) (0x3026070) Stream added, broadcasting: 3\nI1014 14:03:54.368600 479 log.go:181] (0x254e000) Reply frame received for 3\nI1014 14:03:54.369087 479 log.go:181] (0x254e000) (0x2d120e0) Create stream\nI1014 14:03:54.369241 479 
log.go:181] (0x254e000) (0x2d120e0) Stream added, broadcasting: 5\nI1014 14:03:54.371132 479 log.go:181] (0x254e000) Reply frame received for 5\nI1014 14:03:54.437881 479 log.go:181] (0x254e000) Data frame received for 5\nI1014 14:03:54.438191 479 log.go:181] (0x2d120e0) (5) Data frame handling\nI1014 14:03:54.438625 479 log.go:181] (0x2d120e0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1014 14:03:54.439836 479 log.go:181] (0x254e000) Data frame received for 5\nI1014 14:03:54.439987 479 log.go:181] (0x254e000) Data frame received for 1\nI1014 14:03:54.440142 479 log.go:181] (0x254e000) Data frame received for 3\nI1014 14:03:54.440321 479 log.go:181] (0x3026070) (3) Data frame handling\nI1014 14:03:54.440419 479 log.go:181] (0x254e070) (1) Data frame handling\nI1014 14:03:54.440527 479 log.go:181] (0x254e070) (1) Data frame sent\nI1014 14:03:54.440764 479 log.go:181] (0x2d120e0) (5) Data frame handling\nI1014 14:03:54.441710 479 log.go:181] (0x254e000) (0x254e070) Stream removed, broadcasting: 1\nI1014 14:03:54.443902 479 log.go:181] (0x254e000) Go away received\nI1014 14:03:54.446992 479 log.go:181] (0x254e000) (0x254e070) Stream removed, broadcasting: 1\nI1014 14:03:54.447222 479 log.go:181] (0x254e000) (0x3026070) Stream removed, broadcasting: 3\nI1014 14:03:54.447440 479 log.go:181] (0x254e000) (0x2d120e0) Stream removed, broadcasting: 5\n" Oct 14 14:03:54.457: INFO: stdout: "" Oct 14 14:03:54.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-532 execpod-affinitylqxxj -- /bin/sh -x -c nc -zv -t -w 2 10.108.185.230 80' Oct 14 14:03:55.959: INFO: stderr: "I1014 14:03:55.851589 499 log.go:181] (0x2f2e150) (0x2f2e1c0) Create stream\nI1014 14:03:55.855310 499 log.go:181] (0x2f2e150) (0x2f2e1c0) Stream added, broadcasting: 1\nI1014 14:03:55.865899 499 log.go:181] (0x2f2e150) Reply frame received for 
1\nI1014 14:03:55.866330 499 log.go:181] (0x2f2e150) (0x2f2e380) Create stream\nI1014 14:03:55.866393 499 log.go:181] (0x2f2e150) (0x2f2e380) Stream added, broadcasting: 3\nI1014 14:03:55.868179 499 log.go:181] (0x2f2e150) Reply frame received for 3\nI1014 14:03:55.868720 499 log.go:181] (0x2f2e150) (0x27d7c00) Create stream\nI1014 14:03:55.868962 499 log.go:181] (0x2f2e150) (0x27d7c00) Stream added, broadcasting: 5\nI1014 14:03:55.870618 499 log.go:181] (0x2f2e150) Reply frame received for 5\nI1014 14:03:55.941165 499 log.go:181] (0x2f2e150) Data frame received for 3\nI1014 14:03:55.941797 499 log.go:181] (0x2f2e150) Data frame received for 5\nI1014 14:03:55.942189 499 log.go:181] (0x27d7c00) (5) Data frame handling\nI1014 14:03:55.942560 499 log.go:181] (0x2f2e380) (3) Data frame handling\nI1014 14:03:55.943268 499 log.go:181] (0x2f2e150) Data frame received for 1\nI1014 14:03:55.943392 499 log.go:181] (0x2f2e1c0) (1) Data frame handling\nI1014 14:03:55.943669 499 log.go:181] (0x27d7c00) (5) Data frame sent\nI1014 14:03:55.944097 499 log.go:181] (0x2f2e1c0) (1) Data frame sent\n+ nc -zv -t -w 2 10.108.185.230 80\nConnection to 10.108.185.230 80 port [tcp/http] succeeded!\nI1014 14:03:55.944500 499 log.go:181] (0x2f2e150) Data frame received for 5\nI1014 14:03:55.944653 499 log.go:181] (0x27d7c00) (5) Data frame handling\nI1014 14:03:55.947472 499 log.go:181] (0x2f2e150) (0x2f2e1c0) Stream removed, broadcasting: 1\nI1014 14:03:55.949429 499 log.go:181] (0x2f2e150) Go away received\nI1014 14:03:55.951239 499 log.go:181] (0x2f2e150) (0x2f2e1c0) Stream removed, broadcasting: 1\nI1014 14:03:55.951432 499 log.go:181] (0x2f2e150) (0x2f2e380) Stream removed, broadcasting: 3\nI1014 14:03:55.951577 499 log.go:181] (0x2f2e150) (0x27d7c00) Stream removed, broadcasting: 5\n" Oct 14 14:03:55.960: INFO: stdout: "" Oct 14 14:03:55.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-532 
execpod-affinitylqxxj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31804' Oct 14 14:03:57.525: INFO: stderr: "I1014 14:03:57.394069 519 log.go:181] (0x293e000) (0x293e070) Create stream\nI1014 14:03:57.396491 519 log.go:181] (0x293e000) (0x293e070) Stream added, broadcasting: 1\nI1014 14:03:57.407759 519 log.go:181] (0x293e000) Reply frame received for 1\nI1014 14:03:57.408642 519 log.go:181] (0x293e000) (0x266c2a0) Create stream\nI1014 14:03:57.408753 519 log.go:181] (0x293e000) (0x266c2a0) Stream added, broadcasting: 3\nI1014 14:03:57.410467 519 log.go:181] (0x293e000) Reply frame received for 3\nI1014 14:03:57.410765 519 log.go:181] (0x293e000) (0x266c460) Create stream\nI1014 14:03:57.410824 519 log.go:181] (0x293e000) (0x266c460) Stream added, broadcasting: 5\nI1014 14:03:57.412219 519 log.go:181] (0x293e000) Reply frame received for 5\nI1014 14:03:57.509316 519 log.go:181] (0x293e000) Data frame received for 3\nI1014 14:03:57.509616 519 log.go:181] (0x293e000) Data frame received for 5\nI1014 14:03:57.509745 519 log.go:181] (0x266c460) (5) Data frame handling\nI1014 14:03:57.509948 519 log.go:181] (0x266c2a0) (3) Data frame handling\nI1014 14:03:57.510237 519 log.go:181] (0x293e000) Data frame received for 1\nI1014 14:03:57.510381 519 log.go:181] (0x293e070) (1) Data frame handling\nI1014 14:03:57.510657 519 log.go:181] (0x293e070) (1) Data frame sent\nI1014 14:03:57.510758 519 log.go:181] (0x266c460) (5) Data frame sent\nI1014 14:03:57.511010 519 log.go:181] (0x293e000) Data frame received for 5\nI1014 14:03:57.511101 519 log.go:181] (0x266c460) (5) Data frame handling\nI1014 14:03:57.511575 519 log.go:181] (0x293e000) (0x293e070) Stream removed, broadcasting: 1\n+ nc -zv -t -w 2 172.18.0.15 31804\nConnection to 172.18.0.15 31804 port [tcp/31804] succeeded!\nI1014 14:03:57.514491 519 log.go:181] (0x293e000) Go away received\nI1014 14:03:57.516462 519 log.go:181] (0x293e000) (0x293e070) Stream removed, broadcasting: 1\nI1014 14:03:57.516928 519 log.go:181] 
(0x293e000) (0x266c2a0) Stream removed, broadcasting: 3\nI1014 14:03:57.517177 519 log.go:181] (0x293e000) (0x266c460) Stream removed, broadcasting: 5\n" Oct 14 14:03:57.526: INFO: stdout: "" Oct 14 14:03:57.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-532 execpod-affinitylqxxj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31804' Oct 14 14:03:59.047: INFO: stderr: "I1014 14:03:58.916783 539 log.go:181] (0x2820310) (0x28204d0) Create stream\nI1014 14:03:58.919785 539 log.go:181] (0x2820310) (0x28204d0) Stream added, broadcasting: 1\nI1014 14:03:58.928413 539 log.go:181] (0x2820310) Reply frame received for 1\nI1014 14:03:58.929511 539 log.go:181] (0x2820310) (0x2820690) Create stream\nI1014 14:03:58.929624 539 log.go:181] (0x2820310) (0x2820690) Stream added, broadcasting: 3\nI1014 14:03:58.931539 539 log.go:181] (0x2820310) Reply frame received for 3\nI1014 14:03:58.931897 539 log.go:181] (0x2820310) (0x2d2a070) Create stream\nI1014 14:03:58.931988 539 log.go:181] (0x2820310) (0x2d2a070) Stream added, broadcasting: 5\nI1014 14:03:58.934386 539 log.go:181] (0x2820310) Reply frame received for 5\nI1014 14:03:59.025663 539 log.go:181] (0x2820310) Data frame received for 3\nI1014 14:03:59.026121 539 log.go:181] (0x2820690) (3) Data frame handling\nI1014 14:03:59.026307 539 log.go:181] (0x2820310) Data frame received for 5\nI1014 14:03:59.026547 539 log.go:181] (0x2d2a070) (5) Data frame handling\nI1014 14:03:59.026815 539 log.go:181] (0x2820310) Data frame received for 1\nI1014 14:03:59.026998 539 log.go:181] (0x28204d0) (1) Data frame handling\nI1014 14:03:59.028342 539 log.go:181] (0x28204d0) (1) Data frame sent\nI1014 14:03:59.028545 539 log.go:181] (0x2d2a070) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31804\nI1014 14:03:59.029249 539 log.go:181] (0x2820310) Data frame received for 5\nI1014 14:03:59.029344 539 log.go:181] (0x2d2a070) (5) Data frame handling\nConnection 
to 172.18.0.14 31804 port [tcp/31804] succeeded!\nI1014 14:03:59.030379 539 log.go:181] (0x2d2a070) (5) Data frame sent\nI1014 14:03:59.030532 539 log.go:181] (0x2820310) Data frame received for 5\nI1014 14:03:59.030613 539 log.go:181] (0x2d2a070) (5) Data frame handling\nI1014 14:03:59.031323 539 log.go:181] (0x2820310) (0x28204d0) Stream removed, broadcasting: 1\nI1014 14:03:59.033839 539 log.go:181] (0x2820310) Go away received\nI1014 14:03:59.035865 539 log.go:181] (0x2820310) (0x28204d0) Stream removed, broadcasting: 1\nI1014 14:03:59.036326 539 log.go:181] (0x2820310) (0x2820690) Stream removed, broadcasting: 3\nI1014 14:03:59.036501 539 log.go:181] (0x2820310) (0x2d2a070) Stream removed, broadcasting: 5\n" Oct 14 14:03:59.048: INFO: stdout: "" Oct 14 14:03:59.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-532 execpod-affinitylqxxj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31804/ ; done' Oct 14 14:04:00.630: INFO: stderr: "I1014 14:04:00.442390 560 log.go:181] (0x2ab00e0) (0x2ab0230) Create stream\nI1014 14:04:00.445229 560 log.go:181] (0x2ab00e0) (0x2ab0230) Stream added, broadcasting: 1\nI1014 14:04:00.455309 560 log.go:181] (0x2ab00e0) Reply frame received for 1\nI1014 14:04:00.455736 560 log.go:181] (0x2ab00e0) (0x2ab03f0) Create stream\nI1014 14:04:00.455795 560 log.go:181] (0x2ab00e0) (0x2ab03f0) Stream added, broadcasting: 3\nI1014 14:04:00.457092 560 log.go:181] (0x2ab00e0) Reply frame received for 3\nI1014 14:04:00.457297 560 log.go:181] (0x2ab00e0) (0x2ab05b0) Create stream\nI1014 14:04:00.457351 560 log.go:181] (0x2ab00e0) (0x2ab05b0) Stream added, broadcasting: 5\nI1014 14:04:00.458506 560 log.go:181] (0x2ab00e0) Reply frame received for 5\nI1014 14:04:00.509365 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.509720 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 
14:04:00.509870 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.510029 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.510662 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.511003 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.515103 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.515259 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.515431 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.516244 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.516425 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.516549 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.516726 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.516822 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.517076 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.521384 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.521514 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.521639 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.521933 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.522039 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.522172 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.522281 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.522382 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.522514 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.526803 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.526940 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 
14:04:00.527068 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.527730 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.527823 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.527895 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.527959 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.528015 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.528092 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.533604 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.533785 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.533947 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.534297 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.534486 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.534635 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.534776 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.534885 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.535052 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.541129 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.541262 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.541436 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.541803 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.541967 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.542108 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.542257 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.542354 560 log.go:181] (0x2ab03f0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 
14:04:00.542466 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.547429 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.547506 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.547615 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.548326 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.548460 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.548582 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.548734 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.548988 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.549093 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.553250 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.553361 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.553475 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.554115 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.554251 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/I1014 14:04:00.554359 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.554497 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.554580 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.554698 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.554782 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n\nI1014 14:04:00.554886 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.554982 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.557812 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.557914 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.558019 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 
14:04:00.558671 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.558816 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.559007 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.559196 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.559351 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.559539 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.565606 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.565701 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.565803 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.566372 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.566450 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.566516 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.566578 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.566641 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.566720 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.572489 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.572578 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.572672 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.573362 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.573463 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.573530 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.573607 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.573761 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.573906 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 
14:04:00.578571 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.578719 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.578909 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.579247 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.579386 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.579531 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.579735 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.579897 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.580057 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.585341 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.585444 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.585586 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.586163 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.586339 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.586518 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.586792 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.586945 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.587113 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.592223 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.592386 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.592575 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.593052 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.593160 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.593271 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 
14:04:00.593351 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.593429 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.593525 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.597401 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.597488 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.597569 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.598338 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.598558 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.598757 560 log.go:181] (0x2ab05b0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.598924 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.599071 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.599247 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.605142 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.605262 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.605352 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.605775 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.605907 560 log.go:181] (0x2ab05b0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31804/\nI1014 14:04:00.606015 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.606142 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.606261 560 log.go:181] (0x2ab05b0) (5) Data frame sent\nI1014 14:04:00.606410 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.610531 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 14:04:00.610627 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.610733 560 log.go:181] (0x2ab03f0) (3) Data frame sent\nI1014 14:04:00.611328 560 log.go:181] (0x2ab00e0) Data frame received for 3\nI1014 
14:04:00.611425 560 log.go:181] (0x2ab03f0) (3) Data frame handling\nI1014 14:04:00.611802 560 log.go:181] (0x2ab00e0) Data frame received for 5\nI1014 14:04:00.611947 560 log.go:181] (0x2ab05b0) (5) Data frame handling\nI1014 14:04:00.614204 560 log.go:181] (0x2ab00e0) Data frame received for 1\nI1014 14:04:00.614364 560 log.go:181] (0x2ab0230) (1) Data frame handling\nI1014 14:04:00.614472 560 log.go:181] (0x2ab0230) (1) Data frame sent\nI1014 14:04:00.615103 560 log.go:181] (0x2ab00e0) (0x2ab0230) Stream removed, broadcasting: 1\nI1014 14:04:00.618222 560 log.go:181] (0x2ab00e0) Go away received\nI1014 14:04:00.620467 560 log.go:181] (0x2ab00e0) (0x2ab0230) Stream removed, broadcasting: 1\nI1014 14:04:00.621008 560 log.go:181] (0x2ab00e0) (0x2ab03f0) Stream removed, broadcasting: 3\nI1014 14:04:00.621204 560 log.go:181] (0x2ab00e0) (0x2ab05b0) Stream removed, broadcasting: 5\n" Oct 14 14:04:00.635: INFO: stdout: "\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5\naffinity-nodeport-l64j5" Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.635: INFO: 
Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Received response from host: affinity-nodeport-l64j5 Oct 14 14:04:00.636: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-532, will wait for the garbage collector to delete the pods Oct 14 14:04:00.817: INFO: Deleting ReplicationController affinity-nodeport took: 48.228124ms Oct 14 14:04:01.318: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.876278ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:04:15.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-532" for this suite. 
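The spec above sends sixteen requests through the NodePort service and requires that every response name the same backend pod (here, affinity-nodeport-l64j5), which is what session affinity guarantees. A minimal sketch of that check, with the response list taken from the log and the function name purely illustrative:

```python
# Sketch of the session-affinity assertion made by the test above:
# all non-empty responses must come from exactly one backend pod.
def all_from_one_pod(hosts):
    """True when every non-empty response names a single pod."""
    seen = {h for h in hosts if h}
    return len(seen) == 1

responses = ["affinity-nodeport-l64j5"] * 16  # as logged above
assert all_from_one_pod(responses)
assert not all_from_one_pod(["pod-a", "pod-b"])  # affinity broken
```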
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:34.371 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":75,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:04:15.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 14 14:04:15.874: INFO: Waiting up to 5m0s for pod "pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f" in namespace "emptydir-9572" to be "Succeeded 
or Failed" Oct 14 14:04:15.888: INFO: Pod "pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.760328ms Oct 14 14:04:17.895: INFO: Pod "pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021020214s Oct 14 14:04:19.902: INFO: Pod "pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027654506s STEP: Saw pod success Oct 14 14:04:19.902: INFO: Pod "pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f" satisfied condition "Succeeded or Failed" Oct 14 14:04:19.908: INFO: Trying to get logs from node latest-worker pod pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f container test-container: STEP: delete the pod Oct 14 14:04:19.944: INFO: Waiting for pod pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f to disappear Oct 14 14:04:19.951: INFO: Pod pod-691c7e23-44cb-4bb8-aa87-4ba7c6f50c5f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:04:19.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9572" for this suite. 
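The emptyDir spec above writes a file with mode 0666 on the node's default medium and verifies the permission string the test container prints. The rendering from an octal mode to that string can be sketched with the standard library (the expected `-rw-rw-rw-` output is the conventional form for 0666, not quoted from this log):

```python
import stat

# The test creates a regular file with mode 0666; stat.filemode renders
# the ls-style permission string such a file reports.
mode = stat.S_IFREG | 0o666
assert stat.filemode(mode) == "-rw-rw-rw-"
```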
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":76,"skipped":1073,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:04:19.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:04:20.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e" in namespace "projected-3228" to be "Succeeded or Failed" Oct 14 14:04:20.076: INFO: Pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.314006ms Oct 14 14:04:22.085: INFO: Pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021770677s Oct 14 14:04:24.094: INFO: Pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030442849s Oct 14 14:04:26.133: INFO: Pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070140365s STEP: Saw pod success Oct 14 14:04:26.134: INFO: Pod "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e" satisfied condition "Succeeded or Failed" Oct 14 14:04:26.139: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e container client-container: STEP: delete the pod Oct 14 14:04:26.161: INFO: Waiting for pod downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e to disappear Oct 14 14:04:26.180: INFO: Pod downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:04:26.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3228" for this suite. 
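The projected downwardAPI spec above materializes pod metadata as files in a volume; the test container just reads the `podname` file, which the fieldRef `metadata.name` populates. A hedged sketch of that projection (the `project` helper and its dict shapes are illustrative, not the e2e framework's API):

```python
# Sketch of what the projected downwardAPI volume does: each volume item
# maps a fieldRef (e.g. metadata.name) to a file path in the volume.
def project(items, metadata):
    """Return file-path -> contents for the given downwardAPI items."""
    return {path: metadata[field] for path, field in items.items()}

pod_metadata = {"metadata.name": "downwardapi-volume-a4c06f1e-8e52-4530-9327-067fa80afc8e"}
files = project({"podname": "metadata.name"}, pod_metadata)
assert files["podname"].startswith("downwardapi-volume-")
```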
• [SLOW TEST:6.221 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1087,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:04:26.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 14:04:26.278: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 14:04:26.297: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 14:04:26.310: INFO: Logging pods the apiserver thinks is on node latest-worker before test Oct 14 14:04:26.321: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.321: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 14:04:26.321: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.322: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 14:04:26.322: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Oct 14 14:04:26.332: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.332: INFO: Container coredns ready: true, restart count 0 Oct 14 14:04:26.332: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.332: INFO: Container coredns ready: true, restart count 0 Oct 14 14:04:26.333: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.333: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 14:04:26.333: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.333: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 14:04:26.333: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 14:04:26.333: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node 
latest-worker STEP: verifying the node has the label node latest-worker2 Oct 14 14:04:26.462: INFO: Pod coredns-f9fd979d6-l8q79 requesting resource cpu=100m on Node latest-worker2 Oct 14 14:04:26.463: INFO: Pod coredns-f9fd979d6-rhzs8 requesting resource cpu=100m on Node latest-worker2 Oct 14 14:04:26.463: INFO: Pod kindnet-g7vp5 requesting resource cpu=100m on Node latest-worker2 Oct 14 14:04:26.463: INFO: Pod kindnet-jwscz requesting resource cpu=100m on Node latest-worker Oct 14 14:04:26.463: INFO: Pod kube-proxy-bmxmj requesting resource cpu=0m on Node latest-worker2 Oct 14 14:04:26.463: INFO: Pod kube-proxy-cg6dw requesting resource cpu=0m on Node latest-worker Oct 14 14:04:26.463: INFO: Pod local-path-provisioner-78776bfc44-6tlk5 requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Oct 14 14:04:26.463: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Oct 14 14:04:26.477: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
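The filler-pod sizes above (11130m and 10990m) come from summing the CPU already requested on each node and leaving only a small headroom, so that one more pod cannot be scheduled. A sketch of that arithmetic; the allocatable capacity below is an assumed value chosen to reproduce the logged number, not a figure from this log:

```python
# Sketch of the filler-pod sizing behind the scheduler-predicates test:
# fill each node until less than `headroom_m` millicores remain.
def filler_millicpu(allocatable_m, requested_m, headroom_m=150):
    """Millicores a filler pod must request to nearly fill the node."""
    return max(allocatable_m - requested_m - headroom_m, 0)

# latest-worker: kindnet requests 100m, kube-proxy 0m (per the log).
# Allocatable 11380m is an assumption that matches the 11130m filler.
assert filler_millicpu(11380, 100, 150) == 11130
```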
STEP: Considering event: Type = [Normal], Name = [filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44.163de0d9f9b64ecb], Reason = [Created], Message = [Created container filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44] STEP: Considering event: Type = [Normal], Name = [filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44.163de0d956915673], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7218/filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35.163de0d95a76923c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7218/filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44.163de0da0d647140], Reason = [Started], Message = [Started container filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44] STEP: Considering event: Type = [Normal], Name = [filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35.163de0da11fd71d3], Reason = [Created], Message = [Created container filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35] STEP: Considering event: Type = [Normal], Name = [filler-pod-05172091-d97b-456c-80d7-d6a7aa66ec44.163de0d9acd4b9af], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35.163de0da22749eee], Reason = [Started], Message = [Started container filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35] STEP: Considering event: Type = [Normal], Name = [filler-pod-3b57c6ef-47d7-4a14-a22a-07b29b945d35.163de0d9c0297242], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Warning], Name = [additional-pod.163de0da4d2d8b98], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: 
}, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:04:31.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7218" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:5.562 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":78,"skipped":1103,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Oct 14 14:04:31.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Oct 14 14:04:31.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-2140 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 14 14:04:33.082: INFO: stderr: "" Oct 14 14:04:33.082: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Oct 14 14:04:33.083: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 14 14:04:33.084: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2140" to be "running and ready, or succeeded" Oct 14 14:04:33.090: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.687831ms Oct 14 14:04:35.098: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014474562s Oct 14 14:04:37.158: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.073735069s Oct 14 14:04:37.158: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 14 14:04:37.159: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Oct 14 14:04:37.160: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140' Oct 14 14:04:38.505: INFO: stderr: "" Oct 14 14:04:38.505: INFO: stdout: "I1014 14:04:35.587194 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/6k5 285\nI1014 14:04:35.787369 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/cl2 439\nI1014 14:04:35.987367 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/g42 272\nI1014 14:04:36.187337 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mn7 314\nI1014 14:04:36.387342 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/484 417\nI1014 14:04:36.587369 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/zdb 332\nI1014 14:04:36.787327 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/ph2m 471\nI1014 14:04:36.987279 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/zg48 349\nI1014 14:04:37.187323 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dzv 326\nI1014 14:04:37.387236 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5pb7 238\nI1014 14:04:37.587315 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/dt7n 404\nI1014 14:04:37.787303 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/4zfx 503\nI1014 14:04:37.987321 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/68xs 452\nI1014 14:04:38.187227 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/m22f 366\nI1014 14:04:38.387328 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/fckt 500\n" STEP: limiting log lines Oct 14 
14:04:38.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140 --tail=1' Oct 14 14:04:39.755: INFO: stderr: "" Oct 14 14:04:39.755: INFO: stdout: "I1014 14:04:39.587340 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/h5m 549\n" Oct 14 14:04:39.755: INFO: got output "I1014 14:04:39.587340 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/h5m 549\n" STEP: limiting log bytes Oct 14 14:04:39.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140 --limit-bytes=1' Oct 14 14:04:41.141: INFO: stderr: "" Oct 14 14:04:41.141: INFO: stdout: "I" Oct 14 14:04:41.141: INFO: got output "I" STEP: exposing timestamps Oct 14 14:04:41.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140 --tail=1 --timestamps' Oct 14 14:04:42.535: INFO: stderr: "" Oct 14 14:04:42.535: INFO: stdout: "2020-10-14T14:04:42.387469297Z I1014 14:04:42.387315 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/v48 348\n" Oct 14 14:04:42.536: INFO: got output "2020-10-14T14:04:42.387469297Z I1014 14:04:42.387315 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/v48 348\n" STEP: restricting to a time range Oct 14 14:04:45.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140 --since=1s' Oct 14 14:04:46.294: INFO: stderr: "" Oct 14 14:04:46.294: INFO: stdout: "I1014 14:04:45.387336 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/fvw 512\nI1014 14:04:45.587302 1 logs_generator.go:76] 50 GET /api/v1/namespaces/default/pods/zjb7 236\nI1014 14:04:45.787297 1 logs_generator.go:76] 
51 POST /api/v1/namespaces/kube-system/pods/2rh 335\nI1014 14:04:45.987250 1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/8w7 534\nI1014 14:04:46.187285 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/qwfn 589\n" Oct 14 14:04:46.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2140 --since=24h' Oct 14 14:04:47.618: INFO: stderr: "" Oct 14 14:04:47.618: INFO: stdout: "I1014 14:04:35.587194 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/6k5 285\nI1014 14:04:35.787369 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/cl2 439\nI1014 14:04:35.987367 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/g42 272\nI1014 14:04:36.187337 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mn7 314\nI1014 14:04:36.387342 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/484 417\nI1014 14:04:36.587369 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/zdb 332\nI1014 14:04:36.787327 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/ph2m 471\nI1014 14:04:36.987279 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/zg48 349\nI1014 14:04:37.187323 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/dzv 326\nI1014 14:04:37.387236 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5pb7 238\nI1014 14:04:37.587315 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/dt7n 404\nI1014 14:04:37.787303 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/4zfx 503\nI1014 14:04:37.987321 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/68xs 452\nI1014 14:04:38.187227 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/m22f 366\nI1014 14:04:38.387328 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/fckt 500\nI1014 14:04:38.587298 1 logs_generator.go:76] 15 PUT 
/api/v1/namespaces/default/pods/kfr 503\nI1014 14:04:38.787335 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/5r8 489\nI1014 14:04:38.987352 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/hvlg 217\nI1014 14:04:39.187325 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/zbc 383\nI1014 14:04:39.387323 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/nrr 438\nI1014 14:04:39.587340 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/h5m 549\nI1014 14:04:39.787316 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/2dq 533\nI1014 14:04:39.987329 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/749s 567\nI1014 14:04:40.187349 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/2rld 489\nI1014 14:04:40.387311 1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/bz2 223\nI1014 14:04:40.587299 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/8skl 239\nI1014 14:04:40.787361 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/dkh 456\nI1014 14:04:40.987317 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/f6z 381\nI1014 14:04:41.187389 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/jd5x 576\nI1014 14:04:41.387325 1 logs_generator.go:76] 29 POST /api/v1/namespaces/default/pods/nm9 446\nI1014 14:04:41.587301 1 logs_generator.go:76] 30 POST /api/v1/namespaces/kube-system/pods/xcs 466\nI1014 14:04:41.787295 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/gmzj 211\nI1014 14:04:41.987301 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/xwq8 546\nI1014 14:04:42.187333 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/default/pods/jwh 451\nI1014 14:04:42.387315 1 logs_generator.go:76] 34 GET /api/v1/namespaces/kube-system/pods/v48 348\nI1014 14:04:42.587352 1 logs_generator.go:76] 35 GET /api/v1/namespaces/default/pods/hhn 366\nI1014 14:04:42.787354 1 logs_generator.go:76] 36 PUT 
/api/v1/namespaces/kube-system/pods/zt2 480\nI1014 14:04:42.987347 1 logs_generator.go:76] 37 POST /api/v1/namespaces/default/pods/d2cc 506\nI1014 14:04:43.187337 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/kube-system/pods/j6x 320\nI1014 14:04:43.387304 1 logs_generator.go:76] 39 GET /api/v1/namespaces/kube-system/pods/xks 345\nI1014 14:04:43.587343 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/c4l 405\nI1014 14:04:43.787353 1 logs_generator.go:76] 41 POST /api/v1/namespaces/kube-system/pods/v9vr 446\nI1014 14:04:43.987336 1 logs_generator.go:76] 42 GET /api/v1/namespaces/kube-system/pods/rmg 243\nI1014 14:04:44.187349 1 logs_generator.go:76] 43 GET /api/v1/namespaces/ns/pods/sch 270\nI1014 14:04:44.387345 1 logs_generator.go:76] 44 GET /api/v1/namespaces/ns/pods/d7w 408\nI1014 14:04:44.587359 1 logs_generator.go:76] 45 POST /api/v1/namespaces/default/pods/ffb 547\nI1014 14:04:44.787378 1 logs_generator.go:76] 46 PUT /api/v1/namespaces/ns/pods/t57 452\nI1014 14:04:44.987329 1 logs_generator.go:76] 47 GET /api/v1/namespaces/kube-system/pods/pqfl 254\nI1014 14:04:45.187346 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/ns/pods/9dpv 568\nI1014 14:04:45.387336 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/fvw 512\nI1014 14:04:45.587302 1 logs_generator.go:76] 50 GET /api/v1/namespaces/default/pods/zjb7 236\nI1014 14:04:45.787297 1 logs_generator.go:76] 51 POST /api/v1/namespaces/kube-system/pods/2rh 335\nI1014 14:04:45.987250 1 logs_generator.go:76] 52 POST /api/v1/namespaces/default/pods/8w7 534\nI1014 14:04:46.187285 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/ns/pods/qwfn 589\nI1014 14:04:46.387300 1 logs_generator.go:76] 54 POST /api/v1/namespaces/kube-system/pods/6h4 296\nI1014 14:04:46.587320 1 logs_generator.go:76] 55 PUT /api/v1/namespaces/ns/pods/297n 418\nI1014 14:04:46.787433 1 logs_generator.go:76] 56 GET /api/v1/namespaces/kube-system/pods/xtqk 391\nI1014 14:04:46.987315 1 logs_generator.go:76] 57 GET 
/api/v1/namespaces/default/pods/68m 215\nI1014 14:04:47.187351 1 logs_generator.go:76] 58 PUT /api/v1/namespaces/ns/pods/psng 383\nI1014 14:04:47.387322 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/default/pods/jzv 298\nI1014 14:04:47.587370 1 logs_generator.go:76] 60 PUT /api/v1/namespaces/ns/pods/xn9 441\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Oct 14 14:04:47.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2140' Oct 14 14:04:55.683: INFO: stderr: "" Oct 14 14:04:55.683: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:04:55.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2140" for this suite. 
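The kubectl logs spec above exercises `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. The first two have simple semantics that can be mirrored locally: keep the last N lines, or the first N bytes. This is a local sketch of those semantics, not kubectl's implementation:

```python
# Local mirrors of two kubectl logs flags exercised above.
def tail(log, n):
    """Like `kubectl logs --tail=n`: the last n lines."""
    return "".join(log.splitlines(keepends=True)[-n:])

def limit_bytes(log, n):
    """Like `kubectl logs --limit-bytes=n`: the first n bytes."""
    return log.encode()[:n].decode(errors="ignore")

log = "line1\nline2\nline3\n"
assert tail(log, 1) == "line3\n"
assert limit_bytes(log, 1) == "l"  # cf. the single "I" the test got
```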
• [SLOW TEST:23.943 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":79,"skipped":1108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:04:55.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:04:55.798: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2746 I1014 14:04:55.886819 11 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2746, replica 
count: 1 I1014 14:04:56.938924 11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:04:57.939985 11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:04:58.940735 11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:04:59.941745 11 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 14:05:00.771: INFO: Created: latency-svc-v6vkg Oct 14 14:05:00.842: INFO: Got endpoints: latency-svc-v6vkg [797.201958ms] Oct 14 14:05:00.964: INFO: Created: latency-svc-rdnck Oct 14 14:05:00.979: INFO: Got endpoints: latency-svc-rdnck [137.208647ms] Oct 14 14:05:01.062: INFO: Created: latency-svc-7vbb8 Oct 14 14:05:01.066: INFO: Got endpoints: latency-svc-7vbb8 [223.457997ms] Oct 14 14:05:01.107: INFO: Created: latency-svc-kftcx Oct 14 14:05:01.125: INFO: Got endpoints: latency-svc-kftcx [282.028138ms] Oct 14 14:05:01.136: INFO: Created: latency-svc-9kl77 Oct 14 14:05:01.146: INFO: Got endpoints: latency-svc-9kl77 [302.742774ms] Oct 14 14:05:01.205: INFO: Created: latency-svc-8tv6b Oct 14 14:05:01.210: INFO: Got endpoints: latency-svc-8tv6b [366.939912ms] Oct 14 14:05:01.257: INFO: Created: latency-svc-t8d5k Oct 14 14:05:01.282: INFO: Got endpoints: latency-svc-t8d5k [439.667863ms] Oct 14 14:05:01.344: INFO: Created: latency-svc-jlt6r Oct 14 14:05:01.358: INFO: Got endpoints: latency-svc-jlt6r [514.077122ms] Oct 14 14:05:01.390: INFO: Created: latency-svc-4m7wj Oct 14 14:05:01.407: INFO: Got endpoints: latency-svc-4m7wj [563.748017ms] Oct 14 14:05:01.430: INFO: Created: latency-svc-w8jtg Oct 14 14:05:01.436: INFO: Got endpoints: latency-svc-w8jtg [592.716514ms] Oct 
14 14:05:01.516: INFO: Created: latency-svc-frsxz Oct 14 14:05:01.526: INFO: Got endpoints: latency-svc-frsxz [683.06034ms] Oct 14 14:05:01.545: INFO: Created: latency-svc-xmj9x Oct 14 14:05:01.558: INFO: Got endpoints: latency-svc-xmj9x [715.699164ms] Oct 14 14:05:01.643: INFO: Created: latency-svc-qfnbh Oct 14 14:05:01.647: INFO: Got endpoints: latency-svc-qfnbh [804.759722ms] Oct 14 14:05:01.683: INFO: Created: latency-svc-b52x9 Oct 14 14:05:01.706: INFO: Got endpoints: latency-svc-b52x9 [863.980706ms] Oct 14 14:05:01.787: INFO: Created: latency-svc-kbjnz Oct 14 14:05:01.804: INFO: Got endpoints: latency-svc-kbjnz [957.167915ms] Oct 14 14:05:01.827: INFO: Created: latency-svc-56kf4 Oct 14 14:05:01.841: INFO: Got endpoints: latency-svc-56kf4 [994.721445ms] Oct 14 14:05:01.934: INFO: Created: latency-svc-6t8j2 Oct 14 14:05:01.949: INFO: Got endpoints: latency-svc-6t8j2 [969.088898ms] Oct 14 14:05:01.971: INFO: Created: latency-svc-v5nkq Oct 14 14:05:01.979: INFO: Got endpoints: latency-svc-v5nkq [913.286397ms] Oct 14 14:05:02.000: INFO: Created: latency-svc-8gh8n Oct 14 14:05:02.121: INFO: Got endpoints: latency-svc-8gh8n [995.320263ms] Oct 14 14:05:02.123: INFO: Created: latency-svc-7xkv6 Oct 14 14:05:02.129: INFO: Got endpoints: latency-svc-7xkv6 [982.160892ms] Oct 14 14:05:02.168: INFO: Created: latency-svc-98772 Oct 14 14:05:02.184: INFO: Got endpoints: latency-svc-98772 [974.184776ms] Oct 14 14:05:02.211: INFO: Created: latency-svc-qhrx8 Oct 14 14:05:02.291: INFO: Got endpoints: latency-svc-qhrx8 [1.008403357s] Oct 14 14:05:02.320: INFO: Created: latency-svc-8csx8 Oct 14 14:05:02.341: INFO: Got endpoints: latency-svc-8csx8 [983.684871ms] Oct 14 14:05:02.361: INFO: Created: latency-svc-mf97b Oct 14 14:05:02.371: INFO: Got endpoints: latency-svc-mf97b [963.714758ms] Oct 14 14:05:02.452: INFO: Created: latency-svc-gmjgz Oct 14 14:05:02.475: INFO: Created: latency-svc-6bwl2 Oct 14 14:05:02.477: INFO: Got endpoints: latency-svc-gmjgz [1.040904966s] Oct 14 
14:05:02.512: INFO: Got endpoints: latency-svc-6bwl2 [985.438623ms] Oct 14 14:05:02.548: INFO: Created: latency-svc-lgxzs Oct 14 14:05:02.590: INFO: Got endpoints: latency-svc-lgxzs [1.031580186s] Oct 14 14:05:02.600: INFO: Created: latency-svc-mkllf Oct 14 14:05:02.612: INFO: Got endpoints: latency-svc-mkllf [964.431021ms] Oct 14 14:05:02.636: INFO: Created: latency-svc-8g9tm Oct 14 14:05:02.649: INFO: Got endpoints: latency-svc-8g9tm [942.341508ms] Oct 14 14:05:02.666: INFO: Created: latency-svc-655rl Oct 14 14:05:02.679: INFO: Got endpoints: latency-svc-655rl [67.338112ms] Oct 14 14:05:02.721: INFO: Created: latency-svc-crllq Oct 14 14:05:02.745: INFO: Got endpoints: latency-svc-crllq [941.291948ms] Oct 14 14:05:02.745: INFO: Created: latency-svc-mljk5 Oct 14 14:05:02.768: INFO: Got endpoints: latency-svc-mljk5 [926.766271ms] Oct 14 14:05:02.880: INFO: Created: latency-svc-lthz2 Oct 14 14:05:02.889: INFO: Got endpoints: latency-svc-lthz2 [939.947239ms] Oct 14 14:05:02.911: INFO: Created: latency-svc-tj2gd Oct 14 14:05:02.932: INFO: Got endpoints: latency-svc-tj2gd [952.973859ms] Oct 14 14:05:03.057: INFO: Created: latency-svc-49lqs Oct 14 14:05:03.081: INFO: Got endpoints: latency-svc-49lqs [959.669669ms] Oct 14 14:05:03.124: INFO: Created: latency-svc-snch9 Oct 14 14:05:03.206: INFO: Got endpoints: latency-svc-snch9 [1.077186566s] Oct 14 14:05:03.208: INFO: Created: latency-svc-phsqc Oct 14 14:05:03.214: INFO: Got endpoints: latency-svc-phsqc [1.029213775s] Oct 14 14:05:03.237: INFO: Created: latency-svc-cbrg8 Oct 14 14:05:03.250: INFO: Got endpoints: latency-svc-cbrg8 [958.795163ms] Oct 14 14:05:03.268: INFO: Created: latency-svc-9tkkb Oct 14 14:05:03.280: INFO: Got endpoints: latency-svc-9tkkb [938.539869ms] Oct 14 14:05:03.297: INFO: Created: latency-svc-dc62w Oct 14 14:05:03.355: INFO: Got endpoints: latency-svc-dc62w [983.700382ms] Oct 14 14:05:03.362: INFO: Created: latency-svc-hvw9t Oct 14 14:05:03.384: INFO: Got endpoints: latency-svc-hvw9t 
[907.409156ms] Oct 14 14:05:03.404: INFO: Created: latency-svc-xrsgk Oct 14 14:05:03.414: INFO: Got endpoints: latency-svc-xrsgk [901.829542ms] Oct 14 14:05:03.493: INFO: Created: latency-svc-s289h Oct 14 14:05:03.497: INFO: Got endpoints: latency-svc-s289h [907.257169ms] Oct 14 14:05:03.524: INFO: Created: latency-svc-bx2d7 Oct 14 14:05:03.535: INFO: Got endpoints: latency-svc-bx2d7 [885.795652ms] Oct 14 14:05:03.578: INFO: Created: latency-svc-sb4zv Oct 14 14:05:03.588: INFO: Got endpoints: latency-svc-sb4zv [908.25424ms] Oct 14 14:05:03.676: INFO: Created: latency-svc-9cpgp Oct 14 14:05:03.691: INFO: Got endpoints: latency-svc-9cpgp [945.971346ms] Oct 14 14:05:03.717: INFO: Created: latency-svc-lg2hq Oct 14 14:05:03.728: INFO: Got endpoints: latency-svc-lg2hq [959.845715ms] Oct 14 14:05:03.834: INFO: Created: latency-svc-ht24l Oct 14 14:05:03.843: INFO: Got endpoints: latency-svc-ht24l [953.410983ms] Oct 14 14:05:03.891: INFO: Created: latency-svc-jfnlj Oct 14 14:05:03.900: INFO: Got endpoints: latency-svc-jfnlj [966.810489ms] Oct 14 14:05:03.919: INFO: Created: latency-svc-82752 Oct 14 14:05:03.931: INFO: Got endpoints: latency-svc-82752 [849.832662ms] Oct 14 14:05:03.998: INFO: Created: latency-svc-5c9dd Oct 14 14:05:04.016: INFO: Got endpoints: latency-svc-5c9dd [809.839834ms] Oct 14 14:05:04.034: INFO: Created: latency-svc-2ss9q Oct 14 14:05:04.045: INFO: Got endpoints: latency-svc-2ss9q [831.553293ms] Oct 14 14:05:04.127: INFO: Created: latency-svc-qw4fd Oct 14 14:05:04.136: INFO: Got endpoints: latency-svc-qw4fd [886.334872ms] Oct 14 14:05:04.153: INFO: Created: latency-svc-69dk8 Oct 14 14:05:04.179: INFO: Got endpoints: latency-svc-69dk8 [898.759659ms] Oct 14 14:05:04.206: INFO: Created: latency-svc-7m97w Oct 14 14:05:04.215: INFO: Got endpoints: latency-svc-7m97w [859.483213ms] Oct 14 14:05:04.285: INFO: Created: latency-svc-n9mlz Oct 14 14:05:04.289: INFO: Got endpoints: latency-svc-n9mlz [904.619803ms] Oct 14 14:05:04.371: INFO: Created: 
latency-svc-v669p Oct 14 14:05:04.429: INFO: Got endpoints: latency-svc-v669p [1.014531307s] Oct 14 14:05:04.430: INFO: Created: latency-svc-gv494 Oct 14 14:05:04.436: INFO: Got endpoints: latency-svc-gv494 [938.762966ms] Oct 14 14:05:04.455: INFO: Created: latency-svc-qfhrh Oct 14 14:05:04.467: INFO: Got endpoints: latency-svc-qfhrh [931.727183ms] Oct 14 14:05:04.485: INFO: Created: latency-svc-5cqlq Oct 14 14:05:04.498: INFO: Got endpoints: latency-svc-5cqlq [909.706675ms] Oct 14 14:05:04.527: INFO: Created: latency-svc-24nt4 Oct 14 14:05:04.594: INFO: Got endpoints: latency-svc-24nt4 [902.802479ms] Oct 14 14:05:04.610: INFO: Created: latency-svc-79vbs Oct 14 14:05:04.636: INFO: Got endpoints: latency-svc-79vbs [906.92877ms] Oct 14 14:05:04.660: INFO: Created: latency-svc-jxxl2 Oct 14 14:05:04.673: INFO: Got endpoints: latency-svc-jxxl2 [829.847356ms] Oct 14 14:05:04.751: INFO: Created: latency-svc-d6p7r Oct 14 14:05:04.755: INFO: Got endpoints: latency-svc-d6p7r [855.289972ms] Oct 14 14:05:04.779: INFO: Created: latency-svc-527ct Oct 14 14:05:04.803: INFO: Got endpoints: latency-svc-527ct [871.381443ms] Oct 14 14:05:04.826: INFO: Created: latency-svc-wmfxn Oct 14 14:05:04.842: INFO: Got endpoints: latency-svc-wmfxn [825.576694ms] Oct 14 14:05:04.900: INFO: Created: latency-svc-d6dsv Oct 14 14:05:04.913: INFO: Got endpoints: latency-svc-d6dsv [867.549471ms] Oct 14 14:05:04.950: INFO: Created: latency-svc-b4hww Oct 14 14:05:04.969: INFO: Got endpoints: latency-svc-b4hww [832.707569ms] Oct 14 14:05:05.044: INFO: Created: latency-svc-wssq9 Oct 14 14:05:05.065: INFO: Got endpoints: latency-svc-wssq9 [885.006669ms] Oct 14 14:05:05.092: INFO: Created: latency-svc-z92pn Oct 14 14:05:05.108: INFO: Got endpoints: latency-svc-z92pn [892.980174ms] Oct 14 14:05:05.183: INFO: Created: latency-svc-6mhdf Oct 14 14:05:05.204: INFO: Got endpoints: latency-svc-6mhdf [914.606437ms] Oct 14 14:05:05.223: INFO: Created: latency-svc-78rvb Oct 14 14:05:05.234: INFO: Got endpoints: 
latency-svc-78rvb [804.610213ms] Oct 14 14:05:05.252: INFO: Created: latency-svc-4slv4 Oct 14 14:05:05.263: INFO: Got endpoints: latency-svc-4slv4 [827.099749ms] Oct 14 14:05:05.325: INFO: Created: latency-svc-ptcsn Oct 14 14:05:05.335: INFO: Got endpoints: latency-svc-ptcsn [868.320381ms] Oct 14 14:05:05.354: INFO: Created: latency-svc-nrhvl Oct 14 14:05:05.366: INFO: Got endpoints: latency-svc-nrhvl [868.002547ms] Oct 14 14:05:05.392: INFO: Created: latency-svc-s2z8z Oct 14 14:05:05.402: INFO: Got endpoints: latency-svc-s2z8z [807.553764ms] Oct 14 14:05:05.421: INFO: Created: latency-svc-k84qc Oct 14 14:05:05.482: INFO: Got endpoints: latency-svc-k84qc [845.966793ms] Oct 14 14:05:05.492: INFO: Created: latency-svc-lmgxv Oct 14 14:05:05.517: INFO: Got endpoints: latency-svc-lmgxv [843.566173ms] Oct 14 14:05:05.553: INFO: Created: latency-svc-xwl84 Oct 14 14:05:05.573: INFO: Got endpoints: latency-svc-xwl84 [817.433212ms] Oct 14 14:05:05.618: INFO: Created: latency-svc-9p8f8 Oct 14 14:05:05.660: INFO: Got endpoints: latency-svc-9p8f8 [857.599519ms] Oct 14 14:05:05.701: INFO: Created: latency-svc-7lpn7 Oct 14 14:05:05.756: INFO: Got endpoints: latency-svc-7lpn7 [913.540154ms] Oct 14 14:05:05.762: INFO: Created: latency-svc-76hl5 Oct 14 14:05:05.776: INFO: Got endpoints: latency-svc-76hl5 [862.500594ms] Oct 14 14:05:05.799: INFO: Created: latency-svc-4xsqh Oct 14 14:05:05.813: INFO: Got endpoints: latency-svc-4xsqh [843.045068ms] Oct 14 14:05:05.828: INFO: Created: latency-svc-td5gd Oct 14 14:05:05.852: INFO: Got endpoints: latency-svc-td5gd [786.966749ms] Oct 14 14:05:05.913: INFO: Created: latency-svc-gptqw Oct 14 14:05:05.921: INFO: Got endpoints: latency-svc-gptqw [812.855127ms] Oct 14 14:05:05.942: INFO: Created: latency-svc-rphnj Oct 14 14:05:05.957: INFO: Got endpoints: latency-svc-rphnj [752.681271ms] Oct 14 14:05:05.973: INFO: Created: latency-svc-6gbth Oct 14 14:05:05.998: INFO: Got endpoints: latency-svc-6gbth [764.124457ms] Oct 14 14:05:06.057: INFO: 
Created: latency-svc-h6llc Oct 14 14:05:06.079: INFO: Got endpoints: latency-svc-h6llc [815.646306ms] Oct 14 14:05:06.153: INFO: Created: latency-svc-hjq74 Oct 14 14:05:06.188: INFO: Got endpoints: latency-svc-hjq74 [852.719869ms] Oct 14 14:05:06.214: INFO: Created: latency-svc-6fz5s Oct 14 14:05:06.243: INFO: Got endpoints: latency-svc-6fz5s [876.56521ms] Oct 14 14:05:06.380: INFO: Created: latency-svc-rgpbv Oct 14 14:05:06.384: INFO: Got endpoints: latency-svc-rgpbv [981.770854ms] Oct 14 14:05:06.433: INFO: Created: latency-svc-7dpcb Oct 14 14:05:06.447: INFO: Got endpoints: latency-svc-7dpcb [964.99482ms] Oct 14 14:05:06.535: INFO: Created: latency-svc-p9pst Oct 14 14:05:06.561: INFO: Got endpoints: latency-svc-p9pst [1.043989991s] Oct 14 14:05:06.562: INFO: Created: latency-svc-bl2vc Oct 14 14:05:06.574: INFO: Got endpoints: latency-svc-bl2vc [1.000979777s] Oct 14 14:05:06.613: INFO: Created: latency-svc-lhkx7 Oct 14 14:05:06.627: INFO: Got endpoints: latency-svc-lhkx7 [965.883581ms] Oct 14 14:05:06.666: INFO: Created: latency-svc-kbshn Oct 14 14:05:06.675: INFO: Got endpoints: latency-svc-kbshn [919.056465ms] Oct 14 14:05:06.698: INFO: Created: latency-svc-qfkx6 Oct 14 14:05:06.712: INFO: Got endpoints: latency-svc-qfkx6 [936.079408ms] Oct 14 14:05:06.729: INFO: Created: latency-svc-55xwz Oct 14 14:05:06.743: INFO: Got endpoints: latency-svc-55xwz [929.914414ms] Oct 14 14:05:06.759: INFO: Created: latency-svc-f4zcj Oct 14 14:05:06.801: INFO: Got endpoints: latency-svc-f4zcj [949.356782ms] Oct 14 14:05:06.812: INFO: Created: latency-svc-gp96z Oct 14 14:05:06.827: INFO: Got endpoints: latency-svc-gp96z [905.469432ms] Oct 14 14:05:06.842: INFO: Created: latency-svc-xjssp Oct 14 14:05:06.851: INFO: Got endpoints: latency-svc-xjssp [893.490347ms] Oct 14 14:05:06.865: INFO: Created: latency-svc-7db8n Oct 14 14:05:06.875: INFO: Got endpoints: latency-svc-7db8n [876.713093ms] Oct 14 14:05:06.891: INFO: Created: latency-svc-2kwc4 Oct 14 14:05:06.978: INFO: Got 
endpoints: latency-svc-2kwc4 [898.870933ms] Oct 14 14:05:07.016: INFO: Created: latency-svc-wh9ld Oct 14 14:05:07.022: INFO: Got endpoints: latency-svc-wh9ld [832.956182ms] Oct 14 14:05:07.140: INFO: Created: latency-svc-4hxfk Oct 14 14:05:07.145: INFO: Got endpoints: latency-svc-4hxfk [901.704338ms] Oct 14 14:05:07.161: INFO: Created: latency-svc-w87l7 Oct 14 14:05:07.190: INFO: Got endpoints: latency-svc-w87l7 [805.167288ms] Oct 14 14:05:07.219: INFO: Created: latency-svc-dwk5s Oct 14 14:05:07.230: INFO: Got endpoints: latency-svc-dwk5s [782.822516ms] Oct 14 14:05:07.278: INFO: Created: latency-svc-kpb6d Oct 14 14:05:07.294: INFO: Got endpoints: latency-svc-kpb6d [733.229371ms] Oct 14 14:05:07.317: INFO: Created: latency-svc-j5hxw Oct 14 14:05:07.328: INFO: Got endpoints: latency-svc-j5hxw [753.732946ms] Oct 14 14:05:07.346: INFO: Created: latency-svc-wqd9x Oct 14 14:05:07.361: INFO: Got endpoints: latency-svc-wqd9x [734.328676ms] Oct 14 14:05:07.376: INFO: Created: latency-svc-htp68 Oct 14 14:05:07.433: INFO: Got endpoints: latency-svc-htp68 [757.381836ms] Oct 14 14:05:07.461: INFO: Created: latency-svc-ktgf2 Oct 14 14:05:07.473: INFO: Got endpoints: latency-svc-ktgf2 [760.301016ms] Oct 14 14:05:07.491: INFO: Created: latency-svc-4768f Oct 14 14:05:07.517: INFO: Got endpoints: latency-svc-4768f [773.970617ms] Oct 14 14:05:07.583: INFO: Created: latency-svc-dwfjp Oct 14 14:05:07.610: INFO: Got endpoints: latency-svc-dwfjp [808.370463ms] Oct 14 14:05:07.610: INFO: Created: latency-svc-z4hsh Oct 14 14:05:07.635: INFO: Got endpoints: latency-svc-z4hsh [808.058605ms] Oct 14 14:05:07.664: INFO: Created: latency-svc-f9swl Oct 14 14:05:07.677: INFO: Got endpoints: latency-svc-f9swl [826.059984ms] Oct 14 14:05:07.714: INFO: Created: latency-svc-2vbzx Oct 14 14:05:07.719: INFO: Got endpoints: latency-svc-2vbzx [843.545732ms] Oct 14 14:05:07.773: INFO: Created: latency-svc-kg4gx Oct 14 14:05:07.792: INFO: Got endpoints: latency-svc-kg4gx [813.178698ms] Oct 14 14:05:07.809: 
INFO: Created: latency-svc-bbdbb Oct 14 14:05:07.839: INFO: Got endpoints: latency-svc-bbdbb [817.595029ms] Oct 14 14:05:07.862: INFO: Created: latency-svc-2xstt Oct 14 14:05:07.871: INFO: Got endpoints: latency-svc-2xstt [725.603768ms] Oct 14 14:05:07.887: INFO: Created: latency-svc-xrgd2 Oct 14 14:05:07.912: INFO: Got endpoints: latency-svc-xrgd2 [722.141656ms] Oct 14 14:05:07.979: INFO: Created: latency-svc-8qjmf Oct 14 14:05:07.985: INFO: Got endpoints: latency-svc-8qjmf [754.832207ms] Oct 14 14:05:08.006: INFO: Created: latency-svc-fjcl9 Oct 14 14:05:08.022: INFO: Got endpoints: latency-svc-fjcl9 [727.568937ms] Oct 14 14:05:08.042: INFO: Created: latency-svc-cmtc8 Oct 14 14:05:08.065: INFO: Got endpoints: latency-svc-cmtc8 [736.344522ms] Oct 14 14:05:08.142: INFO: Created: latency-svc-5mvhb Oct 14 14:05:08.144: INFO: Got endpoints: latency-svc-5mvhb [782.722977ms] Oct 14 14:05:08.162: INFO: Created: latency-svc-cnxs2 Oct 14 14:05:08.178: INFO: Got endpoints: latency-svc-cnxs2 [745.069705ms] Oct 14 14:05:08.200: INFO: Created: latency-svc-65ltz Oct 14 14:05:08.216: INFO: Got endpoints: latency-svc-65ltz [742.610707ms] Oct 14 14:05:08.385: INFO: Created: latency-svc-6zffl Oct 14 14:05:08.390: INFO: Got endpoints: latency-svc-6zffl [872.885444ms] Oct 14 14:05:08.468: INFO: Created: latency-svc-5kcq6 Oct 14 14:05:08.479: INFO: Got endpoints: latency-svc-5kcq6 [869.011252ms] Oct 14 14:05:08.536: INFO: Created: latency-svc-9rjj5 Oct 14 14:05:08.540: INFO: Got endpoints: latency-svc-9rjj5 [904.585606ms] Oct 14 14:05:08.571: INFO: Created: latency-svc-k2kd8 Oct 14 14:05:08.601: INFO: Got endpoints: latency-svc-k2kd8 [923.808226ms] Oct 14 14:05:08.620: INFO: Created: latency-svc-4sxv5 Oct 14 14:05:08.692: INFO: Got endpoints: latency-svc-4sxv5 [972.997325ms] Oct 14 14:05:08.701: INFO: Created: latency-svc-8zn98 Oct 14 14:05:08.706: INFO: Got endpoints: latency-svc-8zn98 [914.077681ms] Oct 14 14:05:08.725: INFO: Created: latency-svc-f9ct7 Oct 14 14:05:08.739: INFO: Got 
endpoints: latency-svc-f9ct7 [899.461387ms] Oct 14 14:05:08.756: INFO: Created: latency-svc-xtcxd Oct 14 14:05:08.769: INFO: Got endpoints: latency-svc-xtcxd [898.312825ms] Oct 14 14:05:08.787: INFO: Created: latency-svc-hb98d Oct 14 14:05:08.834: INFO: Got endpoints: latency-svc-hb98d [921.5416ms] Oct 14 14:05:08.835: INFO: Created: latency-svc-d245l Oct 14 14:05:08.855: INFO: Got endpoints: latency-svc-d245l [869.063883ms] Oct 14 14:05:08.870: INFO: Created: latency-svc-xt6f9 Oct 14 14:05:08.888: INFO: Got endpoints: latency-svc-xt6f9 [865.70873ms] Oct 14 14:05:08.913: INFO: Created: latency-svc-jv5tf Oct 14 14:05:08.992: INFO: Got endpoints: latency-svc-jv5tf [926.95704ms] Oct 14 14:05:08.996: INFO: Created: latency-svc-2ln5f Oct 14 14:05:09.022: INFO: Got endpoints: latency-svc-2ln5f [877.508764ms] Oct 14 14:05:09.044: INFO: Created: latency-svc-5rl4s Oct 14 14:05:09.064: INFO: Got endpoints: latency-svc-5rl4s [885.842598ms] Oct 14 14:05:09.177: INFO: Created: latency-svc-gzzlc Oct 14 14:05:09.194: INFO: Got endpoints: latency-svc-gzzlc [977.65402ms] Oct 14 14:05:09.205: INFO: Created: latency-svc-xf6jz Oct 14 14:05:09.220: INFO: Got endpoints: latency-svc-xf6jz [829.964375ms] Oct 14 14:05:09.237: INFO: Created: latency-svc-8vlbw Oct 14 14:05:09.262: INFO: Got endpoints: latency-svc-8vlbw [782.295101ms] Oct 14 14:05:09.339: INFO: Created: latency-svc-g2z9c Oct 14 14:05:09.341: INFO: Got endpoints: latency-svc-g2z9c [800.375786ms] Oct 14 14:05:09.375: INFO: Created: latency-svc-zbsh6 Oct 14 14:05:09.389: INFO: Got endpoints: latency-svc-zbsh6 [787.458167ms] Oct 14 14:05:09.410: INFO: Created: latency-svc-kvqhz Oct 14 14:05:09.487: INFO: Got endpoints: latency-svc-kvqhz [794.25584ms] Oct 14 14:05:09.494: INFO: Created: latency-svc-rmtx4 Oct 14 14:05:09.510: INFO: Got endpoints: latency-svc-rmtx4 [803.217843ms] Oct 14 14:05:09.530: INFO: Created: latency-svc-5jqq4 Oct 14 14:05:09.556: INFO: Got endpoints: latency-svc-5jqq4 [816.06709ms] Oct 14 14:05:09.580: INFO: 
Created: latency-svc-6v24d Oct 14 14:05:09.619: INFO: Got endpoints: latency-svc-6v24d [849.445475ms] Oct 14 14:05:09.639: INFO: Created: latency-svc-zvwnv Oct 14 14:05:09.649: INFO: Got endpoints: latency-svc-zvwnv [814.816992ms] Oct 14 14:05:09.670: INFO: Created: latency-svc-6cr95 Oct 14 14:05:09.699: INFO: Got endpoints: latency-svc-6cr95 [844.294199ms] Oct 14 14:05:09.772: INFO: Created: latency-svc-68pxm Oct 14 14:05:09.794: INFO: Got endpoints: latency-svc-68pxm [905.999854ms] Oct 14 14:05:09.818: INFO: Created: latency-svc-d8rr5 Oct 14 14:05:09.850: INFO: Got endpoints: latency-svc-d8rr5 [857.414366ms] Oct 14 14:05:09.918: INFO: Created: latency-svc-cm4hd Oct 14 14:05:09.945: INFO: Created: latency-svc-p7htr Oct 14 14:05:09.946: INFO: Got endpoints: latency-svc-cm4hd [923.7512ms] Oct 14 14:05:09.956: INFO: Got endpoints: latency-svc-p7htr [890.878379ms] Oct 14 14:05:09.974: INFO: Created: latency-svc-75fwd Oct 14 14:05:09.987: INFO: Got endpoints: latency-svc-75fwd [792.780896ms] Oct 14 14:05:10.003: INFO: Created: latency-svc-sm6nz Oct 14 14:05:10.016: INFO: Got endpoints: latency-svc-sm6nz [795.586792ms] Oct 14 14:05:10.098: INFO: Created: latency-svc-lsm4p Oct 14 14:05:10.106: INFO: Got endpoints: latency-svc-lsm4p [844.334749ms] Oct 14 14:05:10.132: INFO: Created: latency-svc-g429j Oct 14 14:05:10.144: INFO: Got endpoints: latency-svc-g429j [802.828944ms] Oct 14 14:05:10.166: INFO: Created: latency-svc-k8d99 Oct 14 14:05:10.195: INFO: Got endpoints: latency-svc-k8d99 [806.352975ms] Oct 14 14:05:10.273: INFO: Created: latency-svc-g9l88 Oct 14 14:05:10.289: INFO: Got endpoints: latency-svc-g9l88 [801.912565ms] Oct 14 14:05:10.311: INFO: Created: latency-svc-659l7 Oct 14 14:05:10.325: INFO: Got endpoints: latency-svc-659l7 [815.16712ms] Oct 14 14:05:10.340: INFO: Created: latency-svc-fs97n Oct 14 14:05:10.355: INFO: Got endpoints: latency-svc-fs97n [799.194854ms] Oct 14 14:05:10.433: INFO: Created: latency-svc-hxsh8 Oct 14 14:05:10.473: INFO: Created: 
latency-svc-lrbxh Oct 14 14:05:10.474: INFO: Got endpoints: latency-svc-hxsh8 [854.975737ms] Oct 14 14:05:10.497: INFO: Got endpoints: latency-svc-lrbxh [847.637033ms] Oct 14 14:05:10.584: INFO: Created: latency-svc-t2zkw Oct 14 14:05:10.622: INFO: Got endpoints: latency-svc-t2zkw [922.526096ms] Oct 14 14:05:10.623: INFO: Created: latency-svc-78q2q Oct 14 14:05:10.647: INFO: Got endpoints: latency-svc-78q2q [852.407213ms] Oct 14 14:05:10.763: INFO: Created: latency-svc-jtkh9 Oct 14 14:05:10.786: INFO: Got endpoints: latency-svc-jtkh9 [935.644212ms] Oct 14 14:05:10.791: INFO: Created: latency-svc-76tjv Oct 14 14:05:10.809: INFO: Got endpoints: latency-svc-76tjv [862.71971ms] Oct 14 14:05:10.831: INFO: Created: latency-svc-99hmk Oct 14 14:05:10.844: INFO: Got endpoints: latency-svc-99hmk [887.827518ms] Oct 14 14:05:10.861: INFO: Created: latency-svc-gxb2g Oct 14 14:05:10.895: INFO: Got endpoints: latency-svc-gxb2g [907.718239ms] Oct 14 14:05:10.946: INFO: Created: latency-svc-9ptml Oct 14 14:05:10.963: INFO: Got endpoints: latency-svc-9ptml [947.065809ms] Oct 14 14:05:10.989: INFO: Created: latency-svc-mrnsf Oct 14 14:05:11.087: INFO: Got endpoints: latency-svc-mrnsf [980.504061ms] Oct 14 14:05:11.090: INFO: Created: latency-svc-9rz54 Oct 14 14:05:11.091: INFO: Got endpoints: latency-svc-9rz54 [947.01189ms] Oct 14 14:05:11.182: INFO: Created: latency-svc-6t7lk Oct 14 14:05:11.273: INFO: Got endpoints: latency-svc-6t7lk [1.077080611s] Oct 14 14:05:11.294: INFO: Created: latency-svc-rb89z Oct 14 14:05:11.307: INFO: Got endpoints: latency-svc-rb89z [1.017796752s] Oct 14 14:05:11.327: INFO: Created: latency-svc-gxlb8 Oct 14 14:05:11.337: INFO: Got endpoints: latency-svc-gxlb8 [1.011695202s] Oct 14 14:05:11.354: INFO: Created: latency-svc-hfcbr Oct 14 14:05:11.368: INFO: Got endpoints: latency-svc-hfcbr [1.013041685s] Oct 14 14:05:11.428: INFO: Created: latency-svc-hbd9q Oct 14 14:05:11.451: INFO: Got endpoints: latency-svc-hbd9q [976.558686ms] Oct 14 14:05:11.451: INFO: 
Created: latency-svc-hqwv7 Oct 14 14:05:11.493: INFO: Got endpoints: latency-svc-hqwv7 [995.861263ms] Oct 14 14:05:11.584: INFO: Created: latency-svc-t5l9r Oct 14 14:05:11.586: INFO: Got endpoints: latency-svc-t5l9r [964.079346ms] Oct 14 14:05:11.630: INFO: Created: latency-svc-98f8s Oct 14 14:05:11.658: INFO: Got endpoints: latency-svc-98f8s [1.011247394s] Oct 14 14:05:11.672: INFO: Created: latency-svc-zffgr Oct 14 14:05:11.733: INFO: Got endpoints: latency-svc-zffgr [947.449983ms] Oct 14 14:05:11.743: INFO: Created: latency-svc-b8gp6 Oct 14 14:05:11.760: INFO: Got endpoints: latency-svc-b8gp6 [951.185527ms] Oct 14 14:05:11.798: INFO: Created: latency-svc-dzx45 Oct 14 14:05:11.809: INFO: Got endpoints: latency-svc-dzx45 [964.832169ms] Oct 14 14:05:11.882: INFO: Created: latency-svc-zplz7 Oct 14 14:05:11.887: INFO: Got endpoints: latency-svc-zplz7 [992.031386ms] Oct 14 14:05:11.929: INFO: Created: latency-svc-gm4s4 Oct 14 14:05:11.940: INFO: Got endpoints: latency-svc-gm4s4 [975.940574ms] Oct 14 14:05:11.972: INFO: Created: latency-svc-nb77s Oct 14 14:05:12.039: INFO: Got endpoints: latency-svc-nb77s [947.861283ms] Oct 14 14:05:12.051: INFO: Created: latency-svc-9xwx7 Oct 14 14:05:12.079: INFO: Got endpoints: latency-svc-9xwx7 [991.86995ms] Oct 14 14:05:12.098: INFO: Created: latency-svc-wvtpv Oct 14 14:05:12.199: INFO: Got endpoints: latency-svc-wvtpv [926.39208ms] Oct 14 14:05:12.205: INFO: Created: latency-svc-zjsdb Oct 14 14:05:12.217: INFO: Got endpoints: latency-svc-zjsdb [909.645312ms] Oct 14 14:05:12.255: INFO: Created: latency-svc-42gzh Oct 14 14:05:12.265: INFO: Got endpoints: latency-svc-42gzh [928.29239ms] Oct 14 14:05:12.299: INFO: Created: latency-svc-fd6fj Oct 14 14:05:12.351: INFO: Got endpoints: latency-svc-fd6fj [982.712612ms] Oct 14 14:05:12.353: INFO: Created: latency-svc-4vzjw Oct 14 14:05:12.362: INFO: Got endpoints: latency-svc-4vzjw [910.475379ms] Oct 14 14:05:12.398: INFO: Created: latency-svc-kct8h Oct 14 14:05:12.423: INFO: Got 
endpoints: latency-svc-kct8h [929.53015ms] Oct 14 14:05:12.447: INFO: Created: latency-svc-f4cdq Oct 14 14:05:12.516: INFO: Got endpoints: latency-svc-f4cdq [930.022665ms] Oct 14 14:05:12.518: INFO: Created: latency-svc-vkc8d Oct 14 14:05:12.549: INFO: Got endpoints: latency-svc-vkc8d [890.104435ms] Oct 14 14:05:12.596: INFO: Created: latency-svc-sb8nv Oct 14 14:05:12.609: INFO: Got endpoints: latency-svc-sb8nv [875.226627ms] Oct 14 14:05:12.655: INFO: Created: latency-svc-q7j54 Oct 14 14:05:12.658: INFO: Got endpoints: latency-svc-q7j54 [897.521569ms] Oct 14 14:05:12.686: INFO: Created: latency-svc-mtw95 Oct 14 14:05:12.699: INFO: Got endpoints: latency-svc-mtw95 [889.786137ms] Oct 14 14:05:12.700: INFO: Latencies: [67.338112ms 137.208647ms 223.457997ms 282.028138ms 302.742774ms 366.939912ms 439.667863ms 514.077122ms 563.748017ms 592.716514ms 683.06034ms 715.699164ms 722.141656ms 725.603768ms 727.568937ms 733.229371ms 734.328676ms 736.344522ms 742.610707ms 745.069705ms 752.681271ms 753.732946ms 754.832207ms 757.381836ms 760.301016ms 764.124457ms 773.970617ms 782.295101ms 782.722977ms 782.822516ms 786.966749ms 787.458167ms 792.780896ms 794.25584ms 795.586792ms 799.194854ms 800.375786ms 801.912565ms 802.828944ms 803.217843ms 804.610213ms 804.759722ms 805.167288ms 806.352975ms 807.553764ms 808.058605ms 808.370463ms 809.839834ms 812.855127ms 813.178698ms 814.816992ms 815.16712ms 815.646306ms 816.06709ms 817.433212ms 817.595029ms 825.576694ms 826.059984ms 827.099749ms 829.847356ms 829.964375ms 831.553293ms 832.707569ms 832.956182ms 843.045068ms 843.545732ms 843.566173ms 844.294199ms 844.334749ms 845.966793ms 847.637033ms 849.445475ms 849.832662ms 852.407213ms 852.719869ms 854.975737ms 855.289972ms 857.414366ms 857.599519ms 859.483213ms 862.500594ms 862.71971ms 863.980706ms 865.70873ms 867.549471ms 868.002547ms 868.320381ms 869.011252ms 869.063883ms 871.381443ms 872.885444ms 875.226627ms 876.56521ms 876.713093ms 877.508764ms 885.006669ms 885.795652ms 885.842598ms 
886.334872ms 887.827518ms 889.786137ms 890.104435ms 890.878379ms 892.980174ms 893.490347ms 897.521569ms 898.312825ms 898.759659ms 898.870933ms 899.461387ms 901.704338ms 901.829542ms 902.802479ms 904.585606ms 904.619803ms 905.469432ms 905.999854ms 906.92877ms 907.257169ms 907.409156ms 907.718239ms 908.25424ms 909.645312ms 909.706675ms 910.475379ms 913.286397ms 913.540154ms 914.077681ms 914.606437ms 919.056465ms 921.5416ms 922.526096ms 923.7512ms 923.808226ms 926.39208ms 926.766271ms 926.95704ms 928.29239ms 929.53015ms 929.914414ms 930.022665ms 931.727183ms 935.644212ms 936.079408ms 938.539869ms 938.762966ms 939.947239ms 941.291948ms 942.341508ms 945.971346ms 947.01189ms 947.065809ms 947.449983ms 947.861283ms 949.356782ms 951.185527ms 952.973859ms 953.410983ms 957.167915ms 958.795163ms 959.669669ms 959.845715ms 963.714758ms 964.079346ms 964.431021ms 964.832169ms 964.99482ms 965.883581ms 966.810489ms 969.088898ms 972.997325ms 974.184776ms 975.940574ms 976.558686ms 977.65402ms 980.504061ms 981.770854ms 982.160892ms 982.712612ms 983.684871ms 983.700382ms 985.438623ms 991.86995ms 992.031386ms 994.721445ms 995.320263ms 995.861263ms 1.000979777s 1.008403357s 1.011247394s 1.011695202s 1.013041685s 1.014531307s 1.017796752s 1.029213775s 1.031580186s 1.040904966s 1.043989991s 1.077080611s 1.077186566s] Oct 14 14:05:12.702: INFO: 50 %ile: 889.786137ms Oct 14 14:05:12.702: INFO: 90 %ile: 983.700382ms Oct 14 14:05:12.702: INFO: 99 %ile: 1.077080611s Oct 14 14:05:12.702: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:05:12.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2746" for this suite. 
• [SLOW TEST:17.029 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":80,"skipped":1172,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:05:12.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Oct 14 14:05:12.821: INFO: PodSpec: initContainers in spec.initContainers
Oct 14 14:06:08.627: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"",
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-784fa743-c182-4506-9aff-f66e2a38ab6d", GenerateName:"", Namespace:"init-container-7535", SelfLink:"/api/v1/namespaces/init-container-7535/pods/pod-init-784fa743-c182-4506-9aff-f66e2a38ab6d", UID:"4fa23c3c-eed0-4574-86db-543d61511862", ResourceVersion:"1135913", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63738281112, loc:(*time.Location)(0x5d1d160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"820366076"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x79adcc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x7f778b0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x79adce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x7f778c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gjw2p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x79add00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gjw2p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gjw2p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gjw2p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x8bb4c58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x8efa940), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8bb4d10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x8bb4d30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x8bb4d38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x8bb4d3c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x79165c8), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738281112, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738281112, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738281112, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", 
Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738281112, loc:(*time.Location)(0x5d1d160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.2.246", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.246"}}, StartTime:(*v1.Time)(0x79adda0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x79addc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x7ea8690)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c13011aa774a8df7fa0d49caa76900197f2212f9baf3327c77ea92d1bd4810f3", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x7f778f0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x7f778d0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x8bb4dbf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:06:08.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7535" for this suite. • [SLOW TEST:55.948 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":81,"skipped":1173,"failed":0} SSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:06:08.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:06:29.060: INFO: Checking APIGroup: apiregistration.k8s.io Oct 14 14:06:29.064: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 14 14:06:29.064: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.064: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 14 14:06:29.064: INFO: Checking APIGroup: extensions Oct 14 14:06:29.067: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 14 14:06:29.067: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 14 14:06:29.067: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 14 14:06:29.067: INFO: Checking APIGroup: apps Oct 14 14:06:29.071: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 14 14:06:29.071: INFO: Versions found [{apps/v1 v1}] Oct 14 14:06:29.071: INFO: apps/v1 matches apps/v1 Oct 14 14:06:29.071: INFO: Checking APIGroup: events.k8s.io Oct 14 14:06:29.074: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 14 14:06:29.074: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.074: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 14 14:06:29.074: INFO: Checking APIGroup: authentication.k8s.io Oct 14 14:06:29.077: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 14 14:06:29.077: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.077: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 14 14:06:29.077: INFO: Checking 
APIGroup: authorization.k8s.io Oct 14 14:06:29.079: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 14 14:06:29.079: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.079: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 14 14:06:29.079: INFO: Checking APIGroup: autoscaling Oct 14 14:06:29.081: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 14 14:06:29.081: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 14 14:06:29.081: INFO: autoscaling/v1 matches autoscaling/v1 Oct 14 14:06:29.081: INFO: Checking APIGroup: batch Oct 14 14:06:29.083: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 14 14:06:29.083: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 14 14:06:29.084: INFO: batch/v1 matches batch/v1 Oct 14 14:06:29.084: INFO: Checking APIGroup: certificates.k8s.io Oct 14 14:06:29.086: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 14 14:06:29.086: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.086: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 14 14:06:29.086: INFO: Checking APIGroup: networking.k8s.io Oct 14 14:06:29.088: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 14 14:06:29.088: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.088: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 14 14:06:29.088: INFO: Checking APIGroup: policy Oct 14 14:06:29.090: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Oct 14 14:06:29.090: INFO: Versions found [{policy/v1beta1 v1beta1}] Oct 14 14:06:29.090: INFO: policy/v1beta1 matches policy/v1beta1 Oct 14 14:06:29.090: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 14 14:06:29.092: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 14 14:06:29.092: INFO: 
Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.092: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 14 14:06:29.092: INFO: Checking APIGroup: storage.k8s.io Oct 14 14:06:29.094: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 14 14:06:29.094: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.094: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 14 14:06:29.094: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 14 14:06:29.096: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 14 14:06:29.097: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.097: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 14 14:06:29.097: INFO: Checking APIGroup: apiextensions.k8s.io Oct 14 14:06:29.099: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 14 14:06:29.099: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.099: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 14 14:06:29.099: INFO: Checking APIGroup: scheduling.k8s.io Oct 14 14:06:29.101: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 14 14:06:29.102: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.102: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 14 14:06:29.102: INFO: Checking APIGroup: coordination.k8s.io Oct 14 14:06:29.103: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 14 14:06:29.104: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.104: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 14 14:06:29.104: INFO: Checking APIGroup: node.k8s.io Oct 14 14:06:29.106: INFO: 
PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Oct 14 14:06:29.106: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.106: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Oct 14 14:06:29.106: INFO: Checking APIGroup: discovery.k8s.io Oct 14 14:06:29.108: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Oct 14 14:06:29.108: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Oct 14 14:06:29.108: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:06:29.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-8533" for this suite. • [SLOW TEST:20.451 seconds] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":82,"skipped":1177,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:06:29.141: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 14 14:06:29.321: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:29.355: INFO: Number of nodes with available pods: 0 Oct 14 14:06:29.355: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:06:30.446: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:30.453: INFO: Number of nodes with available pods: 0 Oct 14 14:06:30.453: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:06:31.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:31.822: INFO: Number of nodes with available pods: 0 Oct 14 14:06:31.823: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:06:32.365: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:32.371: INFO: Number of nodes with available pods: 0 Oct 14 14:06:32.371: 
INFO: Node latest-worker is running more than one daemon pod Oct 14 14:06:33.369: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:33.375: INFO: Number of nodes with available pods: 0 Oct 14 14:06:33.375: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:06:34.379: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:34.388: INFO: Number of nodes with available pods: 2 Oct 14 14:06:34.388: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Oct 14 14:06:34.466: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:34.521: INFO: Number of nodes with available pods: 1 Oct 14 14:06:34.521: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:06:35.533: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:35.539: INFO: Number of nodes with available pods: 1 Oct 14 14:06:35.539: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:06:36.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:36.542: INFO: Number of nodes with available pods: 1 Oct 14 14:06:36.542: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:06:37.537: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:06:37.544: INFO: Number of nodes with available pods: 2 Oct 14 14:06:37.544: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9810, will wait for the garbage collector to delete the pods Oct 14 14:06:37.652: INFO: Deleting DaemonSet.extensions daemon-set took: 39.867752ms Oct 14 14:06:38.153: INFO: Terminating DaemonSet.extensions daemon-set pods took: 501.513465ms Oct 14 14:06:45.759: INFO: Number of nodes with available pods: 0 Oct 14 14:06:45.760: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 14:06:45.806: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9810/daemonsets","resourceVersion":"1136101"},"items":null} Oct 14 14:06:45.815: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9810/pods","resourceVersion":"1136101"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:06:45.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9810" for this suite. 
• [SLOW TEST:16.710 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":83,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:06:45.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Oct 14 14:06:46.037: INFO: Major version: 1 STEP: Confirm minor version Oct 14 14:06:46.037: INFO: cleanMinorVersion: 19 Oct 14 14:06:46.037: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:06:46.037: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "server-version-9655" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":84,"skipped":1209,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:06:46.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-3496 STEP: creating replication controller nodeport-test in namespace services-3496 I1014 14:06:46.255524 11 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3496, replica count: 2 I1014 14:06:49.307209 11 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:06:52.308005 11 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 14:06:52.308: 
INFO: Creating new exec pod Oct 14 14:06:57.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3496 execpod65bhm -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Oct 14 14:06:58.946: INFO: stderr: "I1014 14:06:58.808514 740 log.go:181] (0x247ca10) (0x247cb60) Create stream\nI1014 14:06:58.814217 740 log.go:181] (0x247ca10) (0x247cb60) Stream added, broadcasting: 1\nI1014 14:06:58.825022 740 log.go:181] (0x247ca10) Reply frame received for 1\nI1014 14:06:58.825540 740 log.go:181] (0x247ca10) (0x25cc3f0) Create stream\nI1014 14:06:58.825610 740 log.go:181] (0x247ca10) (0x25cc3f0) Stream added, broadcasting: 3\nI1014 14:06:58.827374 740 log.go:181] (0x247ca10) Reply frame received for 3\nI1014 14:06:58.827623 740 log.go:181] (0x247ca10) (0x247d260) Create stream\nI1014 14:06:58.827691 740 log.go:181] (0x247ca10) (0x247d260) Stream added, broadcasting: 5\nI1014 14:06:58.828762 740 log.go:181] (0x247ca10) Reply frame received for 5\nI1014 14:06:58.923617 740 log.go:181] (0x247ca10) Data frame received for 5\nI1014 14:06:58.924234 740 log.go:181] (0x247d260) (5) Data frame handling\nI1014 14:06:58.925456 740 log.go:181] (0x247ca10) Data frame received for 3\nI1014 14:06:58.925629 740 log.go:181] (0x25cc3f0) (3) Data frame handling\nI1014 14:06:58.926009 740 log.go:181] (0x247d260) (5) Data frame sent\nI1014 14:06:58.927499 740 log.go:181] (0x247ca10) Data frame received for 5\nI1014 14:06:58.927615 740 log.go:181] (0x247d260) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI1014 14:06:58.929752 740 log.go:181] (0x247ca10) Data frame received for 1\nI1014 14:06:58.929858 740 log.go:181] (0x247cb60) (1) Data frame handling\nI1014 14:06:58.929979 740 log.go:181] (0x247cb60) (1) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1014 14:06:58.934083 740 log.go:181] (0x247d260) (5) Data frame sent\nI1014 14:06:58.934171 740 log.go:181] (0x247ca10) Data 
frame received for 5\nI1014 14:06:58.934254 740 log.go:181] (0x247d260) (5) Data frame handling\nI1014 14:06:58.934488 740 log.go:181] (0x247ca10) (0x247cb60) Stream removed, broadcasting: 1\nI1014 14:06:58.937200 740 log.go:181] (0x247ca10) Go away received\nI1014 14:06:58.939286 740 log.go:181] (0x247ca10) (0x247cb60) Stream removed, broadcasting: 1\nI1014 14:06:58.939437 740 log.go:181] (0x247ca10) (0x25cc3f0) Stream removed, broadcasting: 3\nI1014 14:06:58.939555 740 log.go:181] (0x247ca10) (0x247d260) Stream removed, broadcasting: 5\n" Oct 14 14:06:58.947: INFO: stdout: "" Oct 14 14:06:58.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3496 execpod65bhm -- /bin/sh -x -c nc -zv -t -w 2 10.96.156.37 80' Oct 14 14:07:00.400: INFO: stderr: "I1014 14:07:00.314752 760 log.go:181] (0x27c82a0) (0x27c8310) Create stream\nI1014 14:07:00.316482 760 log.go:181] (0x27c82a0) (0x27c8310) Stream added, broadcasting: 1\nI1014 14:07:00.326213 760 log.go:181] (0x27c82a0) Reply frame received for 1\nI1014 14:07:00.326794 760 log.go:181] (0x27c82a0) (0x2a3c070) Create stream\nI1014 14:07:00.326874 760 log.go:181] (0x27c82a0) (0x2a3c070) Stream added, broadcasting: 3\nI1014 14:07:00.328636 760 log.go:181] (0x27c82a0) Reply frame received for 3\nI1014 14:07:00.329198 760 log.go:181] (0x27c82a0) (0x24eb110) Create stream\nI1014 14:07:00.329314 760 log.go:181] (0x27c82a0) (0x24eb110) Stream added, broadcasting: 5\nI1014 14:07:00.331030 760 log.go:181] (0x27c82a0) Reply frame received for 5\nI1014 14:07:00.385110 760 log.go:181] (0x27c82a0) Data frame received for 5\nI1014 14:07:00.385318 760 log.go:181] (0x24eb110) (5) Data frame handling\nI1014 14:07:00.385420 760 log.go:181] (0x27c82a0) Data frame received for 3\nI1014 14:07:00.385605 760 log.go:181] (0x2a3c070) (3) Data frame handling\nI1014 14:07:00.385918 760 log.go:181] (0x24eb110) (5) Data frame sent\nI1014 14:07:00.386073 760 log.go:181] 
(0x27c82a0) Data frame received for 1\nI1014 14:07:00.386158 760 log.go:181] (0x27c8310) (1) Data frame handling\nI1014 14:07:00.386227 760 log.go:181] (0x27c8310) (1) Data frame sent\n+ nc -zv -t -w 2 10.96.156.37 80\nConnection to 10.96.156.37 80 port [tcp/http] succeeded!\nI1014 14:07:00.386497 760 log.go:181] (0x27c82a0) Data frame received for 5\nI1014 14:07:00.386741 760 log.go:181] (0x24eb110) (5) Data frame handling\nI1014 14:07:00.387663 760 log.go:181] (0x27c82a0) (0x27c8310) Stream removed, broadcasting: 1\nI1014 14:07:00.389383 760 log.go:181] (0x27c82a0) Go away received\nI1014 14:07:00.391530 760 log.go:181] (0x27c82a0) (0x27c8310) Stream removed, broadcasting: 1\nI1014 14:07:00.391893 760 log.go:181] (0x27c82a0) (0x2a3c070) Stream removed, broadcasting: 3\nI1014 14:07:00.392071 760 log.go:181] (0x27c82a0) (0x24eb110) Stream removed, broadcasting: 5\n" Oct 14 14:07:00.401: INFO: stdout: "" Oct 14 14:07:00.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3496 execpod65bhm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30718' Oct 14 14:07:01.895: INFO: stderr: "I1014 14:07:01.781339 780 log.go:181] (0x24dcc40) (0x24dccb0) Create stream\nI1014 14:07:01.783204 780 log.go:181] (0x24dcc40) (0x24dccb0) Stream added, broadcasting: 1\nI1014 14:07:01.798304 780 log.go:181] (0x24dcc40) Reply frame received for 1\nI1014 14:07:01.799023 780 log.go:181] (0x24dcc40) (0x29cc070) Create stream\nI1014 14:07:01.799129 780 log.go:181] (0x24dcc40) (0x29cc070) Stream added, broadcasting: 3\nI1014 14:07:01.800714 780 log.go:181] (0x24dcc40) Reply frame received for 3\nI1014 14:07:01.801102 780 log.go:181] (0x24dcc40) (0x2a26070) Create stream\nI1014 14:07:01.801184 780 log.go:181] (0x24dcc40) (0x2a26070) Stream added, broadcasting: 5\nI1014 14:07:01.804351 780 log.go:181] (0x24dcc40) Reply frame received for 5\nI1014 14:07:01.878259 780 log.go:181] (0x24dcc40) Data frame received for 
5\nI1014 14:07:01.878466 780 log.go:181] (0x24dcc40) Data frame received for 3\nI1014 14:07:01.878719 780 log.go:181] (0x29cc070) (3) Data frame handling\nI1014 14:07:01.878887 780 log.go:181] (0x24dcc40) Data frame received for 1\nI1014 14:07:01.878977 780 log.go:181] (0x24dccb0) (1) Data frame handling\nI1014 14:07:01.879121 780 log.go:181] (0x2a26070) (5) Data frame handling\nI1014 14:07:01.880085 780 log.go:181] (0x24dccb0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30718\nConnection to 172.18.0.15 30718 port [tcp/30718] succeeded!\nI1014 14:07:01.881284 780 log.go:181] (0x2a26070) (5) Data frame sent\nI1014 14:07:01.881528 780 log.go:181] (0x24dcc40) Data frame received for 5\nI1014 14:07:01.881638 780 log.go:181] (0x2a26070) (5) Data frame handling\nI1014 14:07:01.882338 780 log.go:181] (0x24dcc40) (0x24dccb0) Stream removed, broadcasting: 1\nI1014 14:07:01.883031 780 log.go:181] (0x24dcc40) Go away received\nI1014 14:07:01.886586 780 log.go:181] (0x24dcc40) (0x24dccb0) Stream removed, broadcasting: 1\nI1014 14:07:01.887018 780 log.go:181] (0x24dcc40) (0x29cc070) Stream removed, broadcasting: 3\nI1014 14:07:01.887367 780 log.go:181] (0x24dcc40) (0x2a26070) Stream removed, broadcasting: 5\n" Oct 14 14:07:01.896: INFO: stdout: "" Oct 14 14:07:01.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3496 execpod65bhm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30718' Oct 14 14:07:03.363: INFO: stderr: "I1014 14:07:03.242264 800 log.go:181] (0x2a4a000) (0x2a4a070) Create stream\nI1014 14:07:03.246062 800 log.go:181] (0x2a4a000) (0x2a4a070) Stream added, broadcasting: 1\nI1014 14:07:03.255350 800 log.go:181] (0x2a4a000) Reply frame received for 1\nI1014 14:07:03.255885 800 log.go:181] (0x2a4a000) (0x27f2230) Create stream\nI1014 14:07:03.255961 800 log.go:181] (0x2a4a000) (0x27f2230) Stream added, broadcasting: 3\nI1014 14:07:03.257684 800 log.go:181] (0x2a4a000) Reply frame 
received for 3\nI1014 14:07:03.258342 800 log.go:181] (0x2a4a000) (0x2c9e070) Create stream\nI1014 14:07:03.258468 800 log.go:181] (0x2a4a000) (0x2c9e070) Stream added, broadcasting: 5\nI1014 14:07:03.259994 800 log.go:181] (0x2a4a000) Reply frame received for 5\nI1014 14:07:03.345097 800 log.go:181] (0x2a4a000) Data frame received for 3\nI1014 14:07:03.345519 800 log.go:181] (0x2a4a000) Data frame received for 1\nI1014 14:07:03.346349 800 log.go:181] (0x2a4a070) (1) Data frame handling\nI1014 14:07:03.346963 800 log.go:181] (0x2a4a000) Data frame received for 5\nI1014 14:07:03.347439 800 log.go:181] (0x27f2230) (3) Data frame handling\nI1014 14:07:03.347794 800 log.go:181] (0x2c9e070) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30718\nConnection to 172.18.0.14 30718 port [tcp/30718] succeeded!\nI1014 14:07:03.349063 800 log.go:181] (0x2c9e070) (5) Data frame sent\nI1014 14:07:03.349257 800 log.go:181] (0x2a4a070) (1) Data frame sent\nI1014 14:07:03.349562 800 log.go:181] (0x2a4a000) Data frame received for 5\nI1014 14:07:03.349689 800 log.go:181] (0x2c9e070) (5) Data frame handling\nI1014 14:07:03.351554 800 log.go:181] (0x2a4a000) (0x2a4a070) Stream removed, broadcasting: 1\nI1014 14:07:03.353039 800 log.go:181] (0x2a4a000) Go away received\nI1014 14:07:03.355334 800 log.go:181] (0x2a4a000) (0x2a4a070) Stream removed, broadcasting: 1\nI1014 14:07:03.355545 800 log.go:181] (0x2a4a000) (0x27f2230) Stream removed, broadcasting: 3\nI1014 14:07:03.355710 800 log.go:181] (0x2a4a000) (0x2c9e070) Stream removed, broadcasting: 5\n" Oct 14 14:07:03.364: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:07:03.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3496" for this suite. 
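Editor's note: the NodePort check above runs `nc` from an exec pod against each node's IP. A minimal sketch of that probe, with the node IP and port taken from this log (in a live cluster you would substitute your own node address and the service's allocated NodePort):

```shell
# Sketch of the NodePort reachability probe the suite issues via kubectl exec.
# Values below are the ones seen in this log; they are not generally reachable.
NODE_IP="172.18.0.15"
NODE_PORT="30718"

# -z: scan without sending data, -t: TCP, -w 2: 2s connect timeout.
probe="nc -zv -t -w 2 ${NODE_IP} ${NODE_PORT}"
echo "$probe"
# An exit status of 0 from nc means the TCP connect succeeded, which is what
# the test requires for every node it probes (172.18.0.15 and 172.18.0.14 here).
```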
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.329 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":85,"skipped":1214,"failed":0} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:07:03.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall 
+answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8671 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8671;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8671 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8671;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8671.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8671.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8671.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8671.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8671.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8671.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8671.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.216.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.216.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.88_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8671 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8671;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8671 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8671;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8671.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8671.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8671.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8671.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8671.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8671.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8671.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8671.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8671.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 88.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.88_udp@PTR;check="$$(dig +tcp +noall +answer +search 88.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.88_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 14:07:09.717: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.722: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.729: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.733: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.737: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods 
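Editor's note: the wheezy/jessie probe scripts above repeat three shell patterns. A hedged, runnable reconstruction of each, with `lookup` standing in for `dig +notcp +noall +answer +search <name> A` (which needs cluster DNS) and a stand-in pod IP in place of `hostname -i`:

```shell
# Hedged reconstruction of the DNS probe script's helper patterns from the log.
# `lookup` is a stand-in for dig; it emits a fixed answer so the block runs
# without a cluster.
RESULTS=$(mktemp -d)
lookup() { echo "10.102.216.88"; }

# Pattern 1: write an OK marker only when the query returned a non-empty answer.
check="$(lookup dns-test-service)" && test -n "$check" \
  && echo OK > "$RESULTS/wheezy_udp@dns-test-service"

# Pattern 2: build the pod A-record name from the pod IP (the real script uses
# `hostname -i`; 10.244.1.7 here is a stand-in).
podARec=$(echo "10.244.1.7" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8671.pod.cluster.local"}')

# Pattern 3: reverse the service ClusterIP into the in-addr.arpa name queried
# by the PTR probes (10.102.216.88 -> 88.216.102.10.in-addr.arpa.).
ptr=$(lookup | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

cat "$RESULTS/wheezy_udp@dns-test-service"
echo "$podARec"
echo "$ptr"
```

The framework then polls the `/results` files (visible as the ~5s retry cadence in the lookup failures below) until every expected marker appears.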
dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.742: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.747: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.752: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.785: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.788: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.792: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.800: INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the 
requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.808: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.813: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:09.843: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:14.850: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.855: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not 
find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.859: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.869: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.878: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.913: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.917: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: 
the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.922: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.930: INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.935: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.940: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.944: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:14.973: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:19.852: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.858: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.862: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.871: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.878: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.925: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.929: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.934: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.938: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.943: INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.951: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.954: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:19.978: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:24.851: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.856: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.861: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 
14:07:24.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.869: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.873: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.877: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.915: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.918: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.921: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods 
dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.924: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.927: INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.930: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:24.963: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc 
jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:29.852: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.857: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.861: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.879: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod 
dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.979: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.984: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.988: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:29.996: INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:30.001: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:30.005: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:30.010: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:30.038: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:34.851: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.856: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.860: INFO: Unable to read wheezy_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.868: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.872: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.877: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.915: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.919: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.922: INFO: Unable to read jessie_udp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671 from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.930: 
INFO: Unable to read jessie_udp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.934: INFO: Unable to read jessie_tcp@dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.937: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.942: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc from pod dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365: the server could not find the requested resource (get pods dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365) Oct 14 14:07:34.966: INFO: Lookups using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8671 wheezy_tcp@dns-test-service.dns-8671 wheezy_udp@dns-test-service.dns-8671.svc wheezy_tcp@dns-test-service.dns-8671.svc wheezy_udp@_http._tcp.dns-test-service.dns-8671.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8671.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8671 jessie_tcp@dns-test-service.dns-8671 jessie_udp@dns-test-service.dns-8671.svc jessie_tcp@dns-test-service.dns-8671.svc jessie_udp@_http._tcp.dns-test-service.dns-8671.svc jessie_tcp@_http._tcp.dns-test-service.dns-8671.svc] Oct 14 14:07:40.023: INFO: DNS probes using dns-8671/dns-test-fa013e20-1d73-46ae-a98f-65a120b5a365 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:07:40.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8671" for this suite. • [SLOW TEST:37.552 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":86,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:07:40.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should 
scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 14 14:07:41.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-736' Oct 14 14:07:43.675: INFO: stderr: "" Oct 14 14:07:43.675: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 14 14:07:43.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-736' Oct 14 14:07:44.951: INFO: stderr: "" Oct 14 14:07:44.951: INFO: stdout: "update-demo-nautilus-5d5z2 update-demo-nautilus-m2w6m " Oct 14 14:07:44.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d5z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:07:46.472: INFO: stderr: "" Oct 14 14:07:46.472: INFO: stdout: "" Oct 14 14:07:46.473: INFO: update-demo-nautilus-5d5z2 is created but not running Oct 14 14:07:51.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-736' Oct 14 14:07:52.739: INFO: stderr: "" Oct 14 14:07:52.740: INFO: stdout: "update-demo-nautilus-5d5z2 update-demo-nautilus-m2w6m " Oct 14 14:07:52.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d5z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:07:54.001: INFO: stderr: "" Oct 14 14:07:54.001: INFO: stdout: "true" Oct 14 14:07:54.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5d5z2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:07:55.253: INFO: stderr: "" Oct 14 14:07:55.253: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 14:07:55.253: INFO: validating pod update-demo-nautilus-5d5z2 Oct 14 14:07:55.260: INFO: got data: { "image": "nautilus.jpg" } Oct 14 14:07:55.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 14 14:07:55.261: INFO: update-demo-nautilus-5d5z2 is verified up and running Oct 14 14:07:55.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:07:56.484: INFO: stderr: "" Oct 14 14:07:56.484: INFO: stdout: "true" Oct 14 14:07:56.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:07:57.866: INFO: stderr: "" Oct 14 14:07:57.866: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 14:07:57.866: INFO: validating pod update-demo-nautilus-m2w6m Oct 14 14:07:57.873: INFO: got data: { "image": "nautilus.jpg" } Oct 14 14:07:57.874: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 14:07:57.874: INFO: update-demo-nautilus-m2w6m is verified up and running STEP: scaling down the replication controller Oct 14 14:07:57.888: INFO: scanned /root for discovery docs: Oct 14 14:07:57.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-736' Oct 14 14:08:00.207: INFO: stderr: "" Oct 14 14:08:00.207: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 14 14:08:00.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-736' Oct 14 14:08:01.491: INFO: stderr: "" Oct 14 14:08:01.492: INFO: stdout: "update-demo-nautilus-5d5z2 update-demo-nautilus-m2w6m " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 14 14:08:06.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-736' Oct 14 14:08:07.824: INFO: stderr: "" Oct 14 14:08:07.824: INFO: stdout: "update-demo-nautilus-m2w6m " Oct 14 14:08:07.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:09.008: INFO: stderr: "" Oct 14 14:08:09.008: INFO: stdout: "true" Oct 14 14:08:09.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:10.317: INFO: stderr: "" Oct 14 14:08:10.317: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 14:08:10.317: INFO: validating pod update-demo-nautilus-m2w6m Oct 14 14:08:10.322: INFO: got data: { "image": "nautilus.jpg" } Oct 14 14:08:10.323: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 14 14:08:10.323: INFO: update-demo-nautilus-m2w6m is verified up and running STEP: scaling up the replication controller Oct 14 14:08:10.331: INFO: scanned /root for discovery docs: Oct 14 14:08:10.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-736' Oct 14 14:08:12.813: INFO: stderr: "" Oct 14 14:08:12.813: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 14 14:08:12.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-736' Oct 14 14:08:14.120: INFO: stderr: "" Oct 14 14:08:14.121: INFO: stdout: "update-demo-nautilus-m2w6m update-demo-nautilus-vq8v8 " Oct 14 14:08:14.121: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:15.449: INFO: stderr: "" Oct 14 14:08:15.449: INFO: stdout: "true" Oct 14 14:08:15.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m2w6m -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:16.695: INFO: stderr: "" Oct 14 14:08:16.695: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 14:08:16.695: INFO: validating pod update-demo-nautilus-m2w6m Oct 14 14:08:16.701: INFO: got data: { "image": "nautilus.jpg" } Oct 14 14:08:16.701: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 14:08:16.701: INFO: update-demo-nautilus-m2w6m is verified up and running Oct 14 14:08:16.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vq8v8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:17.981: INFO: stderr: "" Oct 14 14:08:17.981: INFO: stdout: "true" Oct 14 14:08:17.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vq8v8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-736' Oct 14 14:08:19.258: INFO: stderr: "" Oct 14 14:08:19.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 14:08:19.259: INFO: validating pod update-demo-nautilus-vq8v8 Oct 14 14:08:19.264: INFO: got data: { "image": "nautilus.jpg" } Oct 14 14:08:19.265: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 14 14:08:19.265: INFO: update-demo-nautilus-vq8v8 is verified up and running STEP: using delete to clean up resources Oct 14 14:08:19.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-736' Oct 14 14:08:20.631: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 14 14:08:20.631: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 14 14:08:20.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-736' Oct 14 14:08:22.329: INFO: stderr: "No resources found in kubectl-736 namespace.\n" Oct 14 14:08:22.329: INFO: stdout: "" Oct 14 14:08:22.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-736 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 14:08:23.602: INFO: stderr: "" Oct 14 14:08:23.602: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:08:23.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-736" for this suite. 
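The repeated go-template probe in the kubectl commands above (`{{if (exists . "status" "containerStatuses")}}…`) prints `true` only when the named container is in the `running` state. The same check can be sketched in Python against a pod object; field names follow the Pod API, and the sample pod dict is illustrative, not taken from this run:

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Sketch of the e2e go-template probe: true only if the named
    container appears in status.containerStatuses with state.running."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# Illustrative pod snippet shaped like the Pod API (not from this log)
pod = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-10-14T14:07:53Z"}}},
]}}
```

This mirrors why the log first prints an empty stdout (`update-demo-nautilus-5d5z2 is created but not running`) and later `true`: the template yields nothing until `state.running` exists.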
• [SLOW TEST:42.673 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":87,"skipped":1244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:08:23.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:08:23.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3680" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":88,"skipped":1291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:08:23.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 14 14:08:23.957: INFO: >>> kubeConfig: /root/.kube/config Oct 14 14:08:44.128: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:09:46.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-1102" for this suite. • [SLOW TEST:82.281 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":89,"skipped":1334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:09:46.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 
STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1014 14:09:59.564386 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 14:11:01.589: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 14 14:11:01.589: INFO: Deleting pod "simpletest-rc-to-be-deleted-2wjh6" in namespace "gc-4045" Oct 14 14:11:01.672: INFO: Deleting pod "simpletest-rc-to-be-deleted-7fkwg" in namespace "gc-4045" Oct 14 14:11:01.718: INFO: Deleting pod "simpletest-rc-to-be-deleted-bkjfd" in namespace "gc-4045" Oct 14 14:11:02.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-c5rr6" in namespace "gc-4045" Oct 14 14:11:02.352: INFO: Deleting pod "simpletest-rc-to-be-deleted-dhg6n" in namespace "gc-4045" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:11:02.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4045" for this suite. 
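The behaviour exercised above (pods owned by both `simpletest-rc-to-be-deleted` and `simpletest-rc-to-stay` survive deletion of the first owner) follows from the garbage collector's rule that a dependent is only collectable once every owner reference is gone. A minimal model of that rule, with illustrative names and no claim to match the controller's real implementation:

```python
def collectable(owner_refs: set, deleted_owners: set) -> bool:
    """A dependent is garbage-collectable only once all of its
    owner references point at deleted objects."""
    return owner_refs <= deleted_owners

# Half the pods in this test carry both RCs as owners:
owners = {"simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"}
```

Deleting only `simpletest-rc-to-be-deleted` leaves `collectable(owners, {"simpletest-rc-to-be-deleted"})` false, so those pods remain until the test deletes them explicitly, as the `Deleting pod` lines show.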
• [SLOW TEST:76.857 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":90,"skipped":1363,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:11:02.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2089c9a7-fc08-46f3-bacb-67ce37242294 STEP: Creating a pod to test consume configMaps Oct 14 14:11:03.370: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73" in namespace "configmap-4748" to be "Succeeded or 
Failed" Oct 14 14:11:03.391: INFO: Pod "pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73": Phase="Pending", Reason="", readiness=false. Elapsed: 20.234604ms Oct 14 14:11:05.400: INFO: Pod "pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029140914s Oct 14 14:11:07.408: INFO: Pod "pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037349649s STEP: Saw pod success Oct 14 14:11:07.408: INFO: Pod "pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73" satisfied condition "Succeeded or Failed" Oct 14 14:11:07.412: INFO: Trying to get logs from node latest-worker pod pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73 container configmap-volume-test: STEP: delete the pod Oct 14 14:11:07.468: INFO: Waiting for pod pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73 to disappear Oct 14 14:11:07.483: INFO: Pod pod-configmaps-2d6c1447-5a92-4d1c-8bfc-a957b6e59e73 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:11:07.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4748" for this suite. 
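The `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` lines above come from the framework's generic poll loop: re-read the pod phase on an interval until it is terminal or the deadline passes. A self-contained sketch of that pattern (function and parameter names are illustrative, not the framework's Go API):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2):
    """Poll until the pod reaches a terminal phase, mirroring the
    framework's 'Waiting up to 5m0s' loop. get_phase is any callable
    returning the current pod phase string."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence matching this log entry:
# Pending (20ms), Pending (2s), Succeeded (4s)
phases = iter(["Pending", "Pending", "Succeeded"])
```

Each `Phase="Pending" … Elapsed: …` line in the log is one iteration of such a loop; the condition is satisfied on the poll that observes `Succeeded`.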
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1377,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:11:07.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Oct 14 14:11:14.851: INFO: 10 pods remaining Oct 14 14:11:14.851: INFO: 0 pods has nil DeletionTimestamp Oct 14 14:11:14.851: INFO: Oct 14 14:11:15.819: INFO: 0 pods remaining Oct 14 14:11:15.819: INFO: 0 pods has nil DeletionTimestamp Oct 14 14:11:15.819: INFO: STEP: Gathering metrics W1014 14:11:18.090061 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 14:12:20.722: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:12:20.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9385" for this suite. • [SLOW TEST:73.237 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":92,"skipped":1386,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:12:20.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Oct 14 14:12:30.918: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:30.919: INFO: >>> kubeConfig: /root/.kube/config
I1014 14:12:31.039149 11 log.go:181] (0x811e7e0) (0x811e8c0) Create stream
I1014 14:12:31.039457 11 log.go:181] (0x811e7e0) (0x811e8c0) Stream added, broadcasting: 1
I1014 14:12:31.044068 11 log.go:181] (0x811e7e0) Reply frame received for 1
I1014 14:12:31.044356 11 log.go:181] (0x811e7e0) (0x811ebd0) Create stream
I1014 14:12:31.044504 11 log.go:181] (0x811e7e0) (0x811ebd0) Stream added, broadcasting: 3
I1014 14:12:31.046398 11 log.go:181] (0x811e7e0) Reply frame received for 3
I1014 14:12:31.046545 11 log.go:181] (0x811e7e0) (0x811ed90) Create stream
I1014 14:12:31.046632 11 log.go:181] (0x811e7e0) (0x811ed90) Stream added, broadcasting: 5
I1014 14:12:31.048124 11 log.go:181] (0x811e7e0) Reply frame received for 5
I1014 14:12:31.130258 11 log.go:181] (0x811e7e0) Data frame received for 3
I1014 14:12:31.130477 11 log.go:181] (0x811ebd0) (3) Data frame handling
I1014 14:12:31.130650 11 log.go:181] (0x811e7e0) Data frame received for 5
I1014 14:12:31.130855 11 log.go:181] (0x811ed90) (5) Data frame handling
I1014 14:12:31.131010 11 log.go:181] (0x811ebd0) (3) Data frame sent
I1014 14:12:31.131197 11 log.go:181] (0x811e7e0) Data frame received for 3
I1014 14:12:31.131362 11 log.go:181] (0x811ebd0) (3) Data frame handling
I1014 14:12:31.131591 11 log.go:181] (0x811e7e0) Data frame received for 1
I1014 14:12:31.131766 11 log.go:181] (0x811e8c0) (1) Data frame handling
I1014 14:12:31.131918 11 log.go:181] (0x811e8c0) (1) Data frame sent
I1014 14:12:31.132112 11 log.go:181] (0x811e7e0) (0x811e8c0) Stream removed, broadcasting: 1
I1014 14:12:31.132331 11 log.go:181] (0x811e7e0) Go away received
I1014 14:12:31.133303 11 log.go:181] (0x811e7e0) (0x811e8c0) Stream removed, broadcasting: 1
I1014 14:12:31.133511 11 log.go:181] (0x811e7e0) (0x811ebd0) Stream removed, broadcasting: 3
I1014 14:12:31.133676 11 log.go:181] (0x811e7e0) (0x811ed90) Stream removed, broadcasting: 5
Oct 14 14:12:31.133: INFO: Exec stderr: ""
Oct 14 14:12:31.134: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:31.134: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:31.320: INFO: Exec stderr: ""
Oct 14 14:12:31.320: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:31.320: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:31.498: INFO: Exec stderr: ""
Oct 14 14:12:31.499: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:31.499: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:31.743: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Oct 14 14:12:31.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:31.744: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:31.928: INFO: Exec stderr: ""
Oct 14 14:12:31.928: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:31.928: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:32.099: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Oct 14 14:12:32.099: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:32.099: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:32.296: INFO: Exec stderr: ""
Oct 14 14:12:32.296: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:32.296: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:32.485: INFO: Exec stderr: ""
Oct 14 14:12:32.486: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:32.486: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:32.693: INFO: Exec stderr: ""
Oct 14 14:12:32.693: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8026 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:12:32.693: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:12:32.868: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:12:32.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8026" for this suite.
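The verification steps above boil down to one rule: the kubelet owns a container's /etc/hosts only when the pod does not use the host network and the container does not mount its own volume at /etc/hosts (busybox-3 above does, so its file is left alone). A minimal Python sketch of that rule follows; the helper name and argument shapes are invented for illustration and this is not the e2e framework's actual code:

```python
# Sketch of the rule the KubeletManagedEtcHosts test verifies (simplified
# model, not client-go): the kubelet rewrites a container's /etc/hosts only
# when the pod is NOT hostNetwork and the container does NOT mount a volume
# at /etc/hosts itself.

def kubelet_manages_etc_hosts(host_network: bool, container_mount_paths: list) -> bool:
    """Return True if the kubelet owns /etc/hosts for this container."""
    if host_network:
        # hostNetwork=true pods see the node's own /etc/hosts, unmanaged.
        return False
    return "/etc/hosts" not in container_mount_paths

# test-pod from the log (hostNetwork=false):
print(kubelet_manages_etc_hosts(False, []))              # busybox-1/2 -> True
print(kubelet_manages_etc_hosts(False, ["/etc/hosts"]))  # busybox-3   -> False
# test-host-network-pod (hostNetwork=true):
print(kubelet_manages_etc_hosts(True, []))               # -> False
```

This mirrors why the test cats both /etc/hosts and /etc/hosts-original: a kubelet-managed file differs from the image's original, an unmanaged one does not.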
• [SLOW TEST:12.142 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1407,"failed":0}
SSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:12:32.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6492.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 14 14:12:39.056: INFO: DNS probes using dns-6492/dns-test-fa72627f-2d40-4868-8fe1-e5093364b56e succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:12:39.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6492" for this suite.
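The `hostname -i | awk -F. …` pipeline in the probe commands above builds the pod's DNS A record by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.<cluster-domain>`. The same transformation can be sketched in Python (the example IP 10.244.1.5 is hypothetical, not taken from this run):

```python
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Mirror the shell pipeline:
    hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".<ns>.pod.<domain>"}'
    i.e. dash-separate the IPv4 octets and append <ns>.pod.<domain>."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod." + domain

print(pod_a_record("10.244.1.5", "dns-6492"))
# 10-244-1-5.dns-6492.pod.cluster.local
```

The probe then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file per successful lookup, which is what the prober reads back.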
• [SLOW TEST:6.304 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":94,"skipped":1411,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:12:39.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 14:12:39.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Oct 14 14:12:59.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 create -f -'
Oct 14 14:13:05.862: INFO: stderr: ""
Oct 14 14:13:05.863: INFO: stdout: "e2e-test-crd-publish-openapi-8307-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 14 14:13:05.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 delete e2e-test-crd-publish-openapi-8307-crds test-foo'
Oct 14 14:13:07.105: INFO: stderr: ""
Oct 14 14:13:07.105: INFO: stdout: "e2e-test-crd-publish-openapi-8307-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Oct 14 14:13:07.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 apply -f -'
Oct 14 14:13:09.939: INFO: stderr: ""
Oct 14 14:13:09.939: INFO: stdout: "e2e-test-crd-publish-openapi-8307-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 14 14:13:09.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 delete e2e-test-crd-publish-openapi-8307-crds test-foo'
Oct 14 14:13:11.165: INFO: stderr: ""
Oct 14 14:13:11.166: INFO: stdout: "e2e-test-crd-publish-openapi-8307-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Oct 14 14:13:11.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 create -f -'
Oct 14 14:13:13.757: INFO: rc: 1
Oct 14 14:13:13.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 apply -f -'
Oct 14 14:13:16.074: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Oct 14 14:13:16.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 create -f -'
Oct 14 14:13:18.804: INFO: rc: 1
Oct 14 14:13:18.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2560 apply -f -'
Oct 14 14:13:21.226: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Oct 14 14:13:21.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8307-crds'
Oct 14 14:13:24.213: INFO: stderr: ""
Oct 14 14:13:24.213: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Oct 14 14:13:24.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8307-crds.metadata'
Oct 14 14:13:26.769: INFO: stderr: ""
Oct 14 14:13:26.770: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects.
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 14 14:13:26.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8307-crds.spec' Oct 14 14:13:29.617: INFO: stderr: "" Oct 14 14:13:29.617: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 14 14:13:29.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8307-crds.spec.bars' Oct 14 14:13:32.760: INFO: stderr: "" Oct 14 14:13:32.760: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 14 14:13:32.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8307-crds.spec.bars2' Oct 14 14:13:35.261: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:13:45.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2560" for this suite. • [SLOW TEST:66.622 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":95,"skipped":1417,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:13:45.812: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:13:50.065: INFO: Waiting up to 5m0s for pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696" in namespace "pods-5787" to be "Succeeded or Failed" Oct 14 14:13:50.112: INFO: Pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696": Phase="Pending", Reason="", readiness=false. Elapsed: 46.913965ms Oct 14 14:13:52.120: INFO: Pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054571189s Oct 14 14:13:54.128: INFO: Pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696": Phase="Running", Reason="", readiness=true. Elapsed: 4.06243799s Oct 14 14:13:56.136: INFO: Pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.070873062s STEP: Saw pod success Oct 14 14:13:56.137: INFO: Pod "client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696" satisfied condition "Succeeded or Failed" Oct 14 14:13:56.142: INFO: Trying to get logs from node latest-worker pod client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696 container env3cont: STEP: delete the pod Oct 14 14:13:56.213: INFO: Waiting for pod client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696 to disappear Oct 14 14:13:56.220: INFO: Pod client-envvars-91cbc4c6-92ab-4fef-8fe4-2f0b83557696 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:13:56.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5787" for this suite. • [SLOW TEST:10.421 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Oct 14 14:13:56.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-651410b4-ffa6-4bd5-9cbb-1a138dc896e5 in namespace container-probe-5803 Oct 14 14:14:00.414: INFO: Started pod liveness-651410b4-ffa6-4bd5-9cbb-1a138dc896e5 in namespace container-probe-5803 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 14:14:00.419: INFO: Initial restart count of pod liveness-651410b4-ffa6-4bd5-9cbb-1a138dc896e5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:18:01.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5803" for this suite. 
• [SLOW TEST:245.380 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1445,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:18:01.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 14 14:18:01.690: INFO: Waiting up to 5m0s for pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248" in namespace "downward-api-4393" to be "Succeeded or Failed" Oct 14 14:18:01.987: INFO: Pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248": 
Phase="Pending", Reason="", readiness=false. Elapsed: 297.030939ms Oct 14 14:18:03.995: INFO: Pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.305370847s Oct 14 14:18:06.167: INFO: Pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248": Phase="Running", Reason="", readiness=true. Elapsed: 4.476801691s Oct 14 14:18:08.175: INFO: Pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.485336582s STEP: Saw pod success Oct 14 14:18:08.175: INFO: Pod "downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248" satisfied condition "Succeeded or Failed" Oct 14 14:18:08.182: INFO: Trying to get logs from node latest-worker pod downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248 container dapi-container: STEP: delete the pod Oct 14 14:18:08.232: INFO: Waiting for pod downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248 to disappear Oct 14 14:18:08.240: INFO: Pod downward-api-ae60d537-5a8a-44a5-80a2-d3f74a14a248 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:18:08.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4393" for this suite. 
• [SLOW TEST:6.637 seconds] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":98,"skipped":1457,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:18:08.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-931bb05e-1c2f-4189-8cb0-5867d979420b in namespace container-probe-451 Oct 14 14:18:12.401: INFO: Started pod 
liveness-931bb05e-1c2f-4189-8cb0-5867d979420b in namespace container-probe-451 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 14:18:12.406: INFO: Initial restart count of pod liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is 0 Oct 14 14:18:26.469: INFO: Restart count of pod container-probe-451/liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is now 1 (14.06334855s elapsed) Oct 14 14:18:46.552: INFO: Restart count of pod container-probe-451/liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is now 2 (34.146244499s elapsed) Oct 14 14:19:06.637: INFO: Restart count of pod container-probe-451/liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is now 3 (54.231183459s elapsed) Oct 14 14:19:26.919: INFO: Restart count of pod container-probe-451/liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is now 4 (1m14.513416783s elapsed) Oct 14 14:20:29.188: INFO: Restart count of pod container-probe-451/liveness-931bb05e-1c2f-4189-8cb0-5867d979420b is now 5 (2m16.782490765s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:20:29.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-451" for this suite. 
• [SLOW TEST:141.034 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1460,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:20:29.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 14 14:20:29.711: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was 
observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:20:45.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4796" for this suite. • [SLOW TEST:16.491 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1471,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:20:45.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 14 14:20:45.952: INFO: Waiting up to 5m0s for pod 
"pod-22593786-9fd8-4262-b390-35faf09fe972" in namespace "emptydir-9180" to be "Succeeded or Failed" Oct 14 14:20:45.985: INFO: Pod "pod-22593786-9fd8-4262-b390-35faf09fe972": Phase="Pending", Reason="", readiness=false. Elapsed: 32.931522ms Oct 14 14:20:47.993: INFO: Pod "pod-22593786-9fd8-4262-b390-35faf09fe972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040580882s Oct 14 14:20:50.019: INFO: Pod "pod-22593786-9fd8-4262-b390-35faf09fe972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066838159s STEP: Saw pod success Oct 14 14:20:50.019: INFO: Pod "pod-22593786-9fd8-4262-b390-35faf09fe972" satisfied condition "Succeeded or Failed" Oct 14 14:20:50.025: INFO: Trying to get logs from node latest-worker pod pod-22593786-9fd8-4262-b390-35faf09fe972 container test-container: STEP: delete the pod Oct 14 14:20:50.082: INFO: Waiting for pod pod-22593786-9fd8-4262-b390-35faf09fe972 to disappear Oct 14 14:20:50.087: INFO: Pod pod-22593786-9fd8-4262-b390-35faf09fe972 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:20:50.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9180" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1473,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:20:50.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-866.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-866.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-866.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-866.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-866.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-866.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 14:20:56.544: INFO: DNS probes using dns-866/dns-test-b21090f1-a42d-49ed-9bce-44129686b2cb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:20:56.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-866" for this suite. 
• [SLOW TEST:6.631 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":102,"skipped":1475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:20:56.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-d83cfb75-36fe-4415-b787-9f5eabf6c5fc STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d83cfb75-36fe-4415-b787-9f5eabf6c5fc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 
14:21:03.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1619" for this suite. • [SLOW TEST:6.676 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1502,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:03.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:21:03.489: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:21:04.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6480" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":104,"skipped":1517,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:04.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-4236/secret-test-dbdbe773-8b22-440e-bda2-d7ca15bd7b7f STEP: Creating a pod to test consume secrets Oct 14 14:21:04.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c" in namespace "secrets-4236" to be "Succeeded or Failed" Oct 14 14:21:04.703: INFO: Pod "pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.717068ms Oct 14 14:21:06.710: INFO: Pod "pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030529929s Oct 14 14:21:08.774: INFO: Pod "pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093923759s STEP: Saw pod success Oct 14 14:21:08.774: INFO: Pod "pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c" satisfied condition "Succeeded or Failed" Oct 14 14:21:08.813: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c container env-test: STEP: delete the pod Oct 14 14:21:08.850: INFO: Waiting for pod pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c to disappear Oct 14 14:21:08.854: INFO: Pod pod-configmaps-f7008827-d3c6-4ee5-aabf-58e26630dd0c no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:21:08.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4236" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1520,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:08.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-3835 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3835 to expose endpoints map[] Oct 14 14:21:09.287: INFO: successfully validated that service endpoint-test2 in namespace services-3835 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3835 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3835 to expose endpoints map[pod1:[80]] Oct 14 14:21:13.428: INFO: successfully validated that service endpoint-test2 in namespace services-3835 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-3835 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3835 to expose endpoints 
map[pod1:[80] pod2:[80]] Oct 14 14:21:17.605: INFO: successfully validated that service endpoint-test2 in namespace services-3835 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-3835 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3835 to expose endpoints map[pod2:[80]] Oct 14 14:21:17.687: INFO: successfully validated that service endpoint-test2 in namespace services-3835 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-3835 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3835 to expose endpoints map[] Oct 14 14:21:18.282: INFO: successfully validated that service endpoint-test2 in namespace services-3835 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:21:18.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3835" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.584 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":106,"skipped":1525,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:18.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 14:21:28.723: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Oct 14 14:21:31.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282088, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282088, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282088, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282088, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 14:21:34.263: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:21:34.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4684" for this suite. STEP: Destroying namespace "webhook-4684-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.187 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":107,"skipped":1527,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:34.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 14 14:21:34.723: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 14 14:21:39.749: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:21:39.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1591" for this suite. • [SLOW TEST:5.268 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":108,"skipped":1537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:21:39.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 14:21:48.716: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 14:21:50.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282108, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282108, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282108, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282108, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 
14:21:53.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:22:04.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4888" for this suite. STEP: Destroying namespace "webhook-4888-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.347 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":109,"skipped":1563,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:22:04.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:22:04.389: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side 
validation (kubectl create and apply) allows request with any unknown properties Oct 14 14:22:24.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-860 create -f -' Oct 14 14:22:29.929: INFO: stderr: "" Oct 14 14:22:29.930: INFO: stdout: "e2e-test-crd-publish-openapi-1289-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 14 14:22:29.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-860 delete e2e-test-crd-publish-openapi-1289-crds test-cr' Oct 14 14:22:31.233: INFO: stderr: "" Oct 14 14:22:31.233: INFO: stdout: "e2e-test-crd-publish-openapi-1289-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 14 14:22:31.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-860 apply -f -' Oct 14 14:22:34.036: INFO: stderr: "" Oct 14 14:22:34.036: INFO: stdout: "e2e-test-crd-publish-openapi-1289-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 14 14:22:34.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-860 delete e2e-test-crd-publish-openapi-1289-crds test-cr' Oct 14 14:22:35.437: INFO: stderr: "" Oct 14 14:22:35.437: INFO: stdout: "e2e-test-crd-publish-openapi-1289-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 14 14:22:35.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1289-crds' Oct 14 14:22:38.726: INFO: stderr: "" Oct 14 14:22:38.726: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1289-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:22:49.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-860" for this suite. • [SLOW TEST:44.955 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":110,"skipped":1565,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:22:49.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 14 14:22:49.516: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:49.526: INFO: Number of nodes with available pods: 0 Oct 14 14:22:49.526: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:22:50.539: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:50.547: INFO: Number of nodes with available pods: 0 Oct 14 14:22:50.547: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:22:51.963: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:52.159: INFO: Number of nodes with available pods: 0 Oct 14 14:22:52.159: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:22:52.660: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:52.667: INFO: Number of nodes with available pods: 0 Oct 14 14:22:52.667: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:22:53.588: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Oct 14 14:22:53.601: INFO: Number of nodes with available pods: 0 Oct 14 14:22:53.601: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:22:54.539: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:54.546: INFO: Number of nodes with available pods: 2 Oct 14 14:22:54.546: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Oct 14 14:22:54.581: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:54.587: INFO: Number of nodes with available pods: 1 Oct 14 14:22:54.587: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:22:55.601: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:55.609: INFO: Number of nodes with available pods: 1 Oct 14 14:22:55.609: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:22:56.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:56.691: INFO: Number of nodes with available pods: 1 Oct 14 14:22:56.691: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:22:57.601: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:57.608: INFO: Number of nodes with available pods: 1 Oct 14 14:22:57.608: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:22:58.600: INFO: 
DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:58.609: INFO: Number of nodes with available pods: 1 Oct 14 14:22:58.609: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:22:59.600: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:22:59.606: INFO: Number of nodes with available pods: 1 Oct 14 14:22:59.607: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:00.601: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:00.607: INFO: Number of nodes with available pods: 1 Oct 14 14:23:00.607: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:01.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:01.606: INFO: Number of nodes with available pods: 1 Oct 14 14:23:01.606: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:02.598: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:02.605: INFO: Number of nodes with available pods: 1 Oct 14 14:23:02.605: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:03.601: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:03.609: INFO: Number of nodes with available pods: 1 Oct 14 14:23:03.609: INFO: Node 
latest-worker2 is running more than one daemon pod Oct 14 14:23:04.599: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:04.606: INFO: Number of nodes with available pods: 1 Oct 14 14:23:04.606: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:05.598: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:05.605: INFO: Number of nodes with available pods: 1 Oct 14 14:23:05.605: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:06.600: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:06.607: INFO: Number of nodes with available pods: 1 Oct 14 14:23:06.607: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:07.630: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:07.692: INFO: Number of nodes with available pods: 1 Oct 14 14:23:07.692: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:08.598: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:08.604: INFO: Number of nodes with available pods: 1 Oct 14 14:23:08.604: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 14:23:09.598: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:23:09.605: INFO: Number 
of nodes with available pods: 2 Oct 14 14:23:09.605: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7497, will wait for the garbage collector to delete the pods Oct 14 14:23:09.715: INFO: Deleting DaemonSet.extensions daemon-set took: 50.26333ms Oct 14 14:23:10.115: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.897119ms Oct 14 14:23:15.827: INFO: Number of nodes with available pods: 0 Oct 14 14:23:15.827: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 14:23:15.831: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7497/daemonsets","resourceVersion":"1140401"},"items":null} Oct 14 14:23:15.833: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7497/pods","resourceVersion":"1140401"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:23:15.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7497" for this suite. 
• [SLOW TEST:26.665 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":111,"skipped":1575,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:23:15.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Oct 14 14:23:15.950: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
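Editor's note: behind "Registering the sample API server." the test creates, among other objects, an APIService that tells the kube-apiserver's aggregation layer to route a group/version to a Service fronting the extension apiserver. A hedged sketch of such a registration (the group wardle.example.com, service name, and priority values here are illustrative assumptions, not values read from this log):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # assumed group/version for the sample apiserver
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api                  # hypothetical Service in the test namespace
    namespace: aggregator-5543        # namespace from this test run
    port: 443
  caBundle: <base64-encoded CA>       # CA that signed the extension apiserver's serving cert
  groupPriorityMinimum: 2000
  versionPriority: 200
```

Once this object exists, requests for /apis/wardle.example.com/v1alpha1/... are proxied by the aggregator to the named Service, which is what the deployment-status polling that follows is waiting to become healthy.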
Oct 14 14:23:30.874: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 14 14:23:33.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 14:23:35.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282210, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 14 14:23:37.958: INFO: Waited 734.045601ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:23:38.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5543" for this suite.
• [SLOW TEST:22.710 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":112,"skipped":1612,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:23:38.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-8757c2ea-6aa5-4f08-a01e-5668c0f5ef7a
STEP: Creating a pod to test consume secrets
Oct 14 14:23:39.030: INFO: Waiting up to 5m0s for pod "pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196" in namespace "secrets-3453" to be "Succeeded or Failed"
Oct 14 14:23:39.064: INFO: Pod "pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196": Phase="Pending", Reason="", readiness=false. Elapsed: 33.856927ms
Oct 14 14:23:41.072: INFO: Pod "pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04169178s
Oct 14 14:23:43.081: INFO: Pod "pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050659452s
STEP: Saw pod success
Oct 14 14:23:43.081: INFO: Pod "pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196" satisfied condition "Succeeded or Failed"
Oct 14 14:23:43.086: INFO: Trying to get logs from node latest-worker pod pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196 container secret-volume-test:
STEP: delete the pod
Oct 14 14:23:43.172: INFO: Waiting for pod pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196 to disappear
Oct 14 14:23:43.194: INFO: Pod pod-secrets-01e862d7-2174-4bdd-9907-df8f76dc1196 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:23:43.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3453" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1634,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:23:43.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Oct 14 14:23:43.473: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:23:43.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4591" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":114,"skipped":1646,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:23:43.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 14:23:43.713: INFO: Creating deployment "test-recreate-deployment"
Oct 14 14:23:43.737: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Oct 14 14:23:43.813: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Oct 14 14:23:46.160: INFO: Waiting deployment "test-recreate-deployment" to complete
Oct 14 14:23:46.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282223, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282223, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282223, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738282223, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 14 14:23:48.172: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Oct 14 14:23:48.186: INFO: Updating deployment test-recreate-deployment
Oct 14 14:23:48.186: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Oct 14 14:23:49.250: INFO: Deployment
"test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3374 /apis/apps/v1/namespaces/deployment-3374/deployments/test-recreate-deployment 4c6ff0f8-ecef-4f9e-aad2-8f8a1cf0e779 1140668 2 2020-10-14 14:23:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-14 14:23:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 14:23:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa9f0c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-14 14:23:49 +0000 UTC,LastTransitionTime:2020-10-14 14:23:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-10-14 14:23:49 +0000 UTC,LastTransitionTime:2020-10-14 14:23:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 14 14:23:49.271: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-3374 /apis/apps/v1/namespaces/deployment-3374/replicasets/test-recreate-deployment-f79dd4667 def1233f-7bbb-4946-84ab-5689d7d3ab21 1140665 1 2020-10-14 14:23:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4c6ff0f8-ecef-4f9e-aad2-8f8a1cf0e779 0xaa017a0 
0xaa017a1}] [] [{kube-controller-manager Update apps/v1 2020-10-14 14:23:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c6ff0f8-ecef-4f9e-aad2-8f8a1cf0e779\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xaa01818 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 14:23:49.272: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 14 14:23:49.273: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-3374 /apis/apps/v1/namespaces/deployment-3374/replicasets/test-recreate-deployment-c96cf48f c84a54d4-fe66-471f-af26-948ae3b9aea1 1140656 2 2020-10-14 14:23:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4c6ff0f8-ecef-4f9e-aad2-8f8a1cf0e779 0xaa016af 0xaa016c0}] [] [{kube-controller-manager Update apps/v1 2020-10-14 14:23:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4c6ff0f8-ecef-4f9e-aad2-8f8a1cf0e779\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xaa01738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 14:23:49.302: INFO: Pod "test-recreate-deployment-f79dd4667-cqfcc" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-cqfcc test-recreate-deployment-f79dd4667- deployment-3374 /api/v1/namespaces/deployment-3374/pods/test-recreate-deployment-f79dd4667-cqfcc ac7543ae-3e50-4084-a84f-293853a2b090 1140670 0 2020-10-14 14:23:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 def1233f-7bbb-4946-84ab-5689d7d3ab21 0xaa01c90 0xaa01c91}] [] [{kube-controller-manager Update v1 2020-10-14 14:23:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"def1233f-7bbb-4946-84ab-5689d7d3ab21\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 14:23:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z5428,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z5428,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z5428,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:23:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:23:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:23:49 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 14:23:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-10-14 14:23:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:23:49.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3374" for this suite. • [SLOW TEST:5.732 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":115,"skipped":1670,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:23:49.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 14 14:23:58.015: INFO: DNS probes using dns-test-2fe6ebeb-0638-4c7b-97b6-ac33172f6961 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 14 14:24:04.157: INFO: File wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:04.162: INFO: File jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:04.162: INFO: Lookups using dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 failed for: [wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local]
Oct 14 14:24:09.171: INFO: File wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:09.176: INFO: File jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:09.176: INFO: Lookups using dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 failed for: [wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local]
Oct 14 14:24:14.169: INFO: File wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:14.174: INFO: File jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:14.174: INFO: Lookups using dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 failed for: [wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local]
Oct 14 14:24:19.170: INFO: File wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:19.174: INFO: File jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:19.174: INFO: Lookups using dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 failed for: [wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local]
Oct 14 14:24:24.170: INFO: File wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:24.174: INFO: File jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local from pod dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 14 14:24:24.174: INFO: Lookups using dns-8800/dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 failed for: [wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local]
Oct 14 14:24:29.175: INFO: DNS probes using dns-test-2a9188f7-d84b-40e8-9d04-8f0feee7c7b0 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8800.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8800.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 14 14:24:36.022: INFO: DNS probes using dns-test-a9ad6ebc-d946-422f-9af1-bd80a350bb8b succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:24:36.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8800" for this suite.
• [SLOW TEST:46.837 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":116,"skipped":1675,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:24:36.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 14:24:36.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Oct 14 14:24:37.268: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:37Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:24:37Z]] name:name1 resourceVersion:1140926 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:be990686-763c-4cda-8ac5-0221b7368394] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Oct 14 14:24:47.282: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:47Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:24:47Z]] name:name2 resourceVersion:1141000 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6a671dcf-d339-4f5c-8df4-f4d244c82660] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Oct 14 14:24:57.317: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:24:57Z]] name:name1 resourceVersion:1141032 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:be990686-763c-4cda-8ac5-0221b7368394] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Oct 14 14:25:07.328: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:25:07Z]] name:name2 resourceVersion:1141060 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6a671dcf-d339-4f5c-8df4-f4d244c82660] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Oct 14 14:25:17.358: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:24:57Z]] name:name1 resourceVersion:1141087 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:be990686-763c-4cda-8ac5-0221b7368394] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Oct 14 14:25:27.374: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T14:24:47Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T14:25:07Z]] name:name2 resourceVersion:1141118 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:6a671dcf-d339-4f5c-8df4-f4d244c82660] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:25:37.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5856" for this suite.
• [SLOW TEST:61.730 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":117,"skipped":1691,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:25:37.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Oct 14 14:25:42.580: INFO: Successfully updated pod "labelsupdateaedeb7a6-fc95-4809-aa0a-e42d1eb159e8"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:25:44.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1327" for this suite.
• [SLOW TEST:6.716 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1693,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:25:44.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 14:25:44.767: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Pending, waiting for it to be Running (with Ready = true)
Oct 14 14:25:46.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Pending, waiting for it to be Running (with Ready = true)
Oct 14 14:25:48.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:25:50.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:25:52.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:25:54.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:25:56.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:25:58.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:00.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:02.778: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:04.775: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:06.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:08.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:10.774: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = false)
Oct 14 14:26:12.779: INFO: The status of Pod test-webserver-23b10982-c8bc-482c-ba44-8dc206246703 is Running (Ready = true)
Oct 14 14:26:12.786: INFO: Container started at 2020-10-14 14:25:47 +0000 UTC, pod became ready at 2020-10-14 14:26:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:26:12.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2455" for this suite.
• [SLOW TEST:28.176 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":1722,"failed":0}
[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:26:12.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3476
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3476
I1014 14:26:13.144468 11 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3476, replica count: 2
I1014 14:26:16.196170 11 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1014 14:26:19.197089 11 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 14 14:26:19.197: INFO: Creating new exec pod
Oct 14 14:26:26.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3476 execpod2mp6n -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 14 14:26:27.734: INFO: stderr: "I1014 14:26:27.611768 1628 log.go:181] (0x309efc0) (0x309f030) Create stream\nI1014 14:26:27.614555 1628 log.go:181] (0x309efc0) (0x309f030) Stream added, broadcasting: 1\nI1014 14:26:27.624079 1628 log.go:181] (0x309efc0) Reply frame received for 1\nI1014 14:26:27.624657 1628 log.go:181] (0x309efc0) (0x247cfc0) Create stream\nI1014 14:26:27.624740 1628 log.go:181] (0x309efc0) (0x247cfc0) Stream added, broadcasting: 3\nI1014 14:26:27.626405 1628 log.go:181] (0x309efc0) Reply frame received for 3\nI1014 14:26:27.626674 1628 log.go:181] (0x309efc0) (0x309f1f0) Create stream\nI1014 14:26:27.626733 1628 log.go:181] (0x309efc0) (0x309f1f0) Stream added, broadcasting: 5\nI1014 14:26:27.627975 1628 log.go:181] (0x309efc0) Reply frame received for 5\nI1014 14:26:27.717286 1628 log.go:181] (0x309efc0) Data frame received for 3\nI1014 14:26:27.718037 1628 log.go:181] (0x309efc0) Data frame received for 1\nI1014 14:26:27.720175 1628 log.go:181] (0x309efc0) Data frame received for 5\nI1014 14:26:27.720281 1628 log.go:181] (0x247cfc0) (3) Data frame handling\nI1014 14:26:27.720623 1628 log.go:181] (0x309f030) (1) Data frame handling\nI1014 14:26:27.720975 1628 log.go:181] (0x309f1f0) (5) Data frame handling\nI1014 14:26:27.722532 1628 log.go:181] (0x309f1f0) (5) Data frame sent\nI1014 14:26:27.722761 1628 log.go:181] (0x309f030) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1014 14:26:27.723673 1628 log.go:181] (0x309efc0) Data frame received for 5\nI1014 14:26:27.723747 1628 log.go:181] (0x309f1f0) (5) Data frame handling\nI1014 14:26:27.723854 1628 log.go:181] (0x309f1f0) (5) Data frame sent\nI1014 14:26:27.723923 1628 log.go:181] (0x309efc0) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1014 14:26:27.724091 1628 log.go:181] (0x309efc0) (0x309f030) Stream removed, broadcasting: 1\nI1014 14:26:27.724383 1628 log.go:181] (0x309f1f0) (5) Data frame handling\nI1014 14:26:27.724614 1628 log.go:181] (0x309efc0) Go away received\nI1014 14:26:27.726711 1628 log.go:181] (0x309efc0) (0x309f030) Stream removed, broadcasting: 1\nI1014 14:26:27.726866 1628 log.go:181] (0x309efc0) (0x247cfc0) Stream removed, broadcasting: 3\nI1014 14:26:27.726994 1628 log.go:181] (0x309efc0) (0x309f1f0) Stream removed, broadcasting: 5\n"
Oct 14 14:26:27.735: INFO: stdout: ""
Oct 14 14:26:27.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3476 execpod2mp6n -- /bin/sh -x -c nc -zv -t -w 2 10.107.10.12 80'
Oct 14 14:26:29.238: INFO: stderr: "I1014 14:26:29.109737 1648 log.go:181] (0x3102150) (0x31021c0) Create stream\nI1014 14:26:29.113540 1648 log.go:181] (0x3102150) (0x31021c0) Stream added, broadcasting: 1\nI1014 14:26:29.124050 1648 log.go:181] (0x3102150) Reply frame received for 1\nI1014 14:26:29.124771 1648 log.go:181] (0x3102150) (0x2954070) Create stream\nI1014 14:26:29.124928 1648 log.go:181] (0x3102150) (0x2954070) Stream added, broadcasting: 3\nI1014 14:26:29.126344 1648 log.go:181] (0x3102150) Reply frame received for 3\nI1014 14:26:29.126535 1648 log.go:181] (0x3102150) (0x251a850) Create stream\nI1014 14:26:29.126589 1648 log.go:181] (0x3102150) (0x251a850) Stream added, broadcasting: 5\nI1014 14:26:29.127734 1648 log.go:181] (0x3102150) Reply frame received for 5\nI1014 14:26:29.218887 1648 log.go:181] (0x3102150) Data frame received for 5\nI1014 14:26:29.219252 1648 log.go:181] (0x3102150) Data frame received for 3\nI1014 14:26:29.219505 1648 log.go:181] (0x3102150) Data frame received for 1\nI1014 14:26:29.219740 1648 log.go:181] (0x251a850) (5) Data frame handling\nI1014 14:26:29.220756 1648 log.go:181] (0x2954070) (3) Data frame handling\nI1014 14:26:29.221232 1648 log.go:181] (0x31021c0) (1) Data frame handling\nI1014 14:26:29.223575 1648 log.go:181] (0x251a850) (5) Data frame sent\nI1014 14:26:29.223792 1648 log.go:181] (0x31021c0) (1) Data frame sent\nI1014 14:26:29.224213 1648 log.go:181] (0x3102150) Data frame received for 5\n+ nc -zv -t -w 2 10.107.10.12 80\nConnection to 10.107.10.12 80 port [tcp/http] succeeded!\nI1014 14:26:29.225503 1648 log.go:181] (0x3102150) (0x31021c0) Stream removed, broadcasting: 1\nI1014 14:26:29.225871 1648 log.go:181] (0x251a850) (5) Data frame handling\nI1014 14:26:29.226657 1648 log.go:181] (0x3102150) Go away received\nI1014 14:26:29.229862 1648 log.go:181] (0x3102150) (0x31021c0) Stream removed, broadcasting: 1\nI1014 14:26:29.230032 1648 log.go:181] (0x3102150) (0x2954070) Stream removed, broadcasting: 3\nI1014 14:26:29.230170 1648 log.go:181] (0x3102150) (0x251a850) Stream removed, broadcasting: 5\n"
Oct 14 14:26:29.239: INFO: stdout: ""
Oct 14 14:26:29.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3476 execpod2mp6n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31904'
Oct 14 14:26:30.799: INFO: stderr: "I1014 14:26:30.677700 1668 log.go:181] (0x2682c40) (0x2682d20) Create stream\nI1014 14:26:30.679821 1668 log.go:181] (0x2682c40) (0x2682d20) Stream added, broadcasting: 1\nI1014 14:26:30.693402 1668 log.go:181] (0x2682c40) Reply frame received for 1\nI1014 14:26:30.693967 1668 log.go:181] (0x2682c40) (0x26833b0) Create stream\nI1014 14:26:30.694036 1668 log.go:181] (0x2682c40) (0x26833b0) Stream added, broadcasting: 3\nI1014 14:26:30.695436 1668 log.go:181] (0x2682c40) Reply frame received for 3\nI1014 14:26:30.695696 1668 log.go:181] (0x2682c40) (0x2d2c000) Create stream\nI1014 14:26:30.695764 1668 log.go:181] (0x2682c40) (0x2d2c000) Stream added, broadcasting: 5\nI1014 14:26:30.696759 1668 log.go:181] (0x2682c40) Reply frame received for 5\nI1014 14:26:30.777929 1668 log.go:181] (0x2682c40) Data frame received for 3\nI1014 14:26:30.778486 1668 log.go:181] (0x2682c40) Data frame received for 5\nI1014 14:26:30.778760 1668 log.go:181] (0x2682c40) Data frame received for 1\nI1014 14:26:30.779215 1668 log.go:181] (0x26833b0) (3) Data frame handling\nI1014 14:26:30.779424 1668 log.go:181] (0x2d2c000) (5) Data frame handling\nI1014 14:26:30.779845 1668 log.go:181] (0x2682d20) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31904\nConnection to 172.18.0.15 31904 port [tcp/31904] succeeded!\nI1014 14:26:30.784314 1668 log.go:181] (0x2d2c000) (5) Data frame sent\nI1014 14:26:30.784956 1668 log.go:181] (0x2682d20) (1) Data frame sent\nI1014 14:26:30.785308 1668 log.go:181] (0x2682c40) Data frame received for 5\nI1014 14:26:30.785450 1668 log.go:181] (0x2d2c000) (5) Data frame handling\nI1014 14:26:30.786912 1668 log.go:181] (0x2682c40) (0x2682d20) Stream removed, broadcasting: 1\nI1014 14:26:30.787503 1668 log.go:181] (0x2682c40) Go away received\nI1014 14:26:30.790221 1668 log.go:181] (0x2682c40) (0x2682d20) Stream removed, broadcasting: 1\nI1014 14:26:30.790490 1668 log.go:181] (0x2682c40) (0x26833b0) Stream removed, broadcasting: 3\nI1014 14:26:30.790752 1668 log.go:181] (0x2682c40) (0x2d2c000) Stream removed, broadcasting: 5\n"
Oct 14 14:26:30.800: INFO: stdout: ""
Oct 14 14:26:30.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3476 execpod2mp6n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31904'
Oct 14 14:26:32.430: INFO: stderr: "I1014 14:26:32.297949 1689 log.go:181] (0x264c9a0) (0x264ccb0) Create stream\nI1014 14:26:32.302507 1689 log.go:181] (0x264c9a0) (0x264ccb0) Stream added, broadcasting: 1\nI1014 14:26:32.310891 1689 log.go:181] (0x264c9a0) Reply frame received for 1\nI1014 14:26:32.311355 1689 log.go:181] (0x264c9a0) (0x264cf50) Create stream\nI1014 14:26:32.311424 1689 log.go:181] (0x264c9a0) (0x264cf50) Stream added, broadcasting: 3\nI1014 14:26:32.312807 1689 log.go:181] (0x264c9a0) Reply frame received for 3\nI1014 14:26:32.313064 1689 log.go:181] (0x264c9a0) (0x25b1d50) Create stream\nI1014 14:26:32.313120 1689 log.go:181] (0x264c9a0) (0x25b1d50) Stream added, broadcasting: 5\nI1014 14:26:32.314444 1689 log.go:181] (0x264c9a0) Reply frame received for 5\nI1014 14:26:32.400710 1689 log.go:181] (0x264c9a0) Data frame received for 3\nI1014 14:26:32.401127 1689 log.go:181] (0x264cf50) (3) Data frame handling\nI1014 14:26:32.401288 1689 log.go:181] (0x264c9a0) Data frame received for 1\nI1014 14:26:32.401433 1689 log.go:181] (0x264ccb0) (1) Data frame handling\nI1014 14:26:32.401629 1689 log.go:181] (0x264c9a0) Data frame received for 5\nI1014 14:26:32.401814 1689 log.go:181] (0x25b1d50) (5) Data frame handling\nI1014 14:26:32.402566 1689 log.go:181] (0x264ccb0) (1) Data frame sent\nI1014 14:26:32.402707 1689 log.go:181] (0x25b1d50) (5) Data frame sent\nI1014 14:26:32.403240 1689 log.go:181] (0x264c9a0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.14 31904\nConnection to 172.18.0.14 31904 port [tcp/31904] succeeded!\nI1014 14:26:32.404529 1689 log.go:181] (0x264c9a0) (0x264ccb0) Stream removed, broadcasting: 1\nI1014 14:26:32.405754 1689 log.go:181] (0x25b1d50) (5) Data frame handling\nI1014 14:26:32.406088 1689 log.go:181] (0x264c9a0) Go away received\nI1014 14:26:32.422676 1689 log.go:181] (0x264c9a0) (0x264ccb0) Stream removed, broadcasting: 1\nI1014 14:26:32.423042 1689 log.go:181] (0x264c9a0) (0x264cf50) Stream removed, broadcasting: 3\nI1014 14:26:32.423401 1689 log.go:181] (0x264c9a0) (0x25b1d50) Stream removed, broadcasting: 5\n"
Oct 14 14:26:32.432: INFO: stdout: ""
Oct 14 14:26:32.432: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:26:32.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3476" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:19.780 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":120,"skipped":1722,"failed":0}
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:26:32.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-301
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-301
STEP: Deleting pre-stop pod
Oct 14 14:26:45.823: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:26:45.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-301" for this suite.
• [SLOW TEST:13.289 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should call prestop when killing a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":121,"skipped":1722,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:26:45.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 14 14:26:46.247: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8044 /api/v1/namespaces/watch-8044/configmaps/e2e-watch-test-resource-version 6c961373-95b5-4c2f-bd37-fa2dca3cf7a8 1141521 0 2020-10-14 14:26:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-14 14:26:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:26:46.249: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8044 /api/v1/namespaces/watch-8044/configmaps/e2e-watch-test-resource-version 6c961373-95b5-4c2f-bd37-fa2dca3cf7a8 1141522 0 2020-10-14 14:26:45 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-14 14:26:46 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:26:46.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8044" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":122,"skipped":1749,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:26:46.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:26:46.534: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 14 14:27:06.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1169 create -f -' Oct 14 14:27:11.540: INFO: stderr: "" Oct 14 14:27:11.540: INFO: stdout: "e2e-test-crd-publish-openapi-5723-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 14 14:27:11.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1169 delete e2e-test-crd-publish-openapi-5723-crds test-cr' Oct 14 14:27:12.833: INFO: stderr: "" Oct 14 14:27:12.833: INFO: stdout: "e2e-test-crd-publish-openapi-5723-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 14 14:27:12.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1169 apply -f -' Oct 14 14:27:15.325: INFO: stderr: "" Oct 14 14:27:15.325: INFO: stdout: "e2e-test-crd-publish-openapi-5723-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 14 14:27:15.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1169 delete e2e-test-crd-publish-openapi-5723-crds test-cr' Oct 14 14:27:16.565: INFO: stderr: "" Oct 14 14:27:16.565: INFO: stdout: "e2e-test-crd-publish-openapi-5723-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 14 14:27:16.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5723-crds' Oct 14 14:27:19.859: INFO: stderr: "" Oct 14 14:27:19.859: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5723-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:27:30.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1169" for this suite. • [SLOW TEST:43.956 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":123,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:27:30.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 14 14:27:30.398: INFO: Waiting up to 5m0s for pod "pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426" in namespace "emptydir-7909" to be "Succeeded or Failed" Oct 14 14:27:30.423: INFO: Pod "pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426": Phase="Pending", Reason="", readiness=false. Elapsed: 24.913914ms Oct 14 14:27:32.595: INFO: Pod "pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197352508s Oct 14 14:27:34.602: INFO: Pod "pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.203795969s STEP: Saw pod success Oct 14 14:27:34.602: INFO: Pod "pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426" satisfied condition "Succeeded or Failed" Oct 14 14:27:34.606: INFO: Trying to get logs from node latest-worker pod pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426 container test-container: STEP: delete the pod Oct 14 14:27:34.662: INFO: Waiting for pod pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426 to disappear Oct 14 14:27:34.732: INFO: Pod pod-5f6084dd-26a1-4eaa-bee9-d6f65823b426 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:27:34.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7909" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":1788,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:27:34.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:27:34.854: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698" in namespace "downward-api-50" to be "Succeeded or Failed" Oct 14 14:27:34.861: INFO: Pod "downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698": Phase="Pending", Reason="", readiness=false. Elapsed: 7.152638ms Oct 14 14:27:36.877: INFO: Pod "downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022358371s Oct 14 14:27:38.885: INFO: Pod "downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030313707s STEP: Saw pod success Oct 14 14:27:38.885: INFO: Pod "downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698" satisfied condition "Succeeded or Failed" Oct 14 14:27:38.890: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698 container client-container: STEP: delete the pod Oct 14 14:27:38.944: INFO: Waiting for pod downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698 to disappear Oct 14 14:27:38.949: INFO: Pod downwardapi-volume-dd5e5640-5b9a-4af0-be2b-6f2ae4d31698 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:27:38.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-50" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":1792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:27:38.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:27:55.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1217" for this suite. • [SLOW TEST:16.260 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":126,"skipped":1826,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:27:55.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6348 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6348 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6348 Oct 14 14:27:55.415: INFO: Found 0 stateful pods, waiting for 1 Oct 14 14:28:05.778: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that 
stateful set scale up will halt with unhealthy stateful pod Oct 14 14:28:05.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 14:28:07.364: INFO: stderr: "I1014 14:28:07.205068 1810 log.go:181] (0x287b650) (0x287b6c0) Create stream\nI1014 14:28:07.208866 1810 log.go:181] (0x287b650) (0x287b6c0) Stream added, broadcasting: 1\nI1014 14:28:07.221389 1810 log.go:181] (0x287b650) Reply frame received for 1\nI1014 14:28:07.222071 1810 log.go:181] (0x287b650) (0x25b8070) Create stream\nI1014 14:28:07.222164 1810 log.go:181] (0x287b650) (0x25b8070) Stream added, broadcasting: 3\nI1014 14:28:07.223899 1810 log.go:181] (0x287b650) Reply frame received for 3\nI1014 14:28:07.224329 1810 log.go:181] (0x287b650) (0x3010070) Create stream\nI1014 14:28:07.224475 1810 log.go:181] (0x287b650) (0x3010070) Stream added, broadcasting: 5\nI1014 14:28:07.227252 1810 log.go:181] (0x287b650) Reply frame received for 5\nI1014 14:28:07.315735 1810 log.go:181] (0x287b650) Data frame received for 5\nI1014 14:28:07.315926 1810 log.go:181] (0x3010070) (5) Data frame handling\nI1014 14:28:07.316263 1810 log.go:181] (0x3010070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 14:28:07.348737 1810 log.go:181] (0x287b650) Data frame received for 3\nI1014 14:28:07.348996 1810 log.go:181] (0x25b8070) (3) Data frame handling\nI1014 14:28:07.349211 1810 log.go:181] (0x287b650) Data frame received for 5\nI1014 14:28:07.349422 1810 log.go:181] (0x3010070) (5) Data frame handling\nI1014 14:28:07.349586 1810 log.go:181] (0x25b8070) (3) Data frame sent\nI1014 14:28:07.349681 1810 log.go:181] (0x287b650) Data frame received for 3\nI1014 14:28:07.349747 1810 log.go:181] (0x25b8070) (3) Data frame handling\nI1014 14:28:07.350219 1810 log.go:181] (0x287b650) Data frame received for 1\nI1014 
14:28:07.350321 1810 log.go:181] (0x287b6c0) (1) Data frame handling\nI1014 14:28:07.350398 1810 log.go:181] (0x287b6c0) (1) Data frame sent\nI1014 14:28:07.351176 1810 log.go:181] (0x287b650) (0x287b6c0) Stream removed, broadcasting: 1\nI1014 14:28:07.352677 1810 log.go:181] (0x287b650) Go away received\nI1014 14:28:07.355888 1810 log.go:181] (0x287b650) (0x287b6c0) Stream removed, broadcasting: 1\nI1014 14:28:07.356090 1810 log.go:181] (0x287b650) (0x25b8070) Stream removed, broadcasting: 3\nI1014 14:28:07.356264 1810 log.go:181] (0x287b650) (0x3010070) Stream removed, broadcasting: 5\n" Oct 14 14:28:07.365: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 14:28:07.365: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 14:28:07.374: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 14 14:28:17.384: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 14:28:17.384: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 14:28:17.412: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999987914s Oct 14 14:28:18.421: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989674489s Oct 14 14:28:19.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980485534s Oct 14 14:28:20.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970803092s Oct 14 14:28:21.449: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.961784787s Oct 14 14:28:22.457: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.952690114s Oct 14 14:28:23.470: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.944461542s Oct 14 14:28:24.499: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.931569147s Oct 14 14:28:25.506: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 1.902012251s Oct 14 14:28:26.514: INFO: Verifying statefulset ss doesn't scale past 1 for another 895.472213ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6348 Oct 14 14:28:27.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:28:29.100: INFO: stderr: "I1014 14:28:28.974231 1830 log.go:181] (0x318a000) (0x318a070) Create stream\nI1014 14:28:28.977535 1830 log.go:181] (0x318a000) (0x318a070) Stream added, broadcasting: 1\nI1014 14:28:28.989237 1830 log.go:181] (0x318a000) Reply frame received for 1\nI1014 14:28:28.990586 1830 log.go:181] (0x318a000) (0x309a000) Create stream\nI1014 14:28:28.990742 1830 log.go:181] (0x318a000) (0x309a000) Stream added, broadcasting: 3\nI1014 14:28:28.992942 1830 log.go:181] (0x318a000) Reply frame received for 3\nI1014 14:28:28.993371 1830 log.go:181] (0x318a000) (0x309a230) Create stream\nI1014 14:28:28.993481 1830 log.go:181] (0x318a000) (0x309a230) Stream added, broadcasting: 5\nI1014 14:28:28.995131 1830 log.go:181] (0x318a000) Reply frame received for 5\nI1014 14:28:29.080455 1830 log.go:181] (0x318a000) Data frame received for 3\nI1014 14:28:29.081361 1830 log.go:181] (0x309a000) (3) Data frame handling\nI1014 14:28:29.082193 1830 log.go:181] (0x318a000) Data frame received for 5\nI1014 14:28:29.082459 1830 log.go:181] (0x309a230) (5) Data frame handling\nI1014 14:28:29.082942 1830 log.go:181] (0x318a000) Data frame received for 1\nI1014 14:28:29.083156 1830 log.go:181] (0x318a070) (1) Data frame handling\nI1014 14:28:29.083549 1830 log.go:181] (0x309a000) (3) Data frame sent\nI1014 14:28:29.083833 1830 log.go:181] (0x309a230) (5) Data frame sent\nI1014 14:28:29.084031 1830 log.go:181] (0x318a070) (1) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI1014 14:28:29.084680 1830 log.go:181] (0x318a000) Data frame received for 5\nI1014 14:28:29.084797 1830 log.go:181] (0x309a230) (5) Data frame handling\nI1014 14:28:29.085552 1830 log.go:181] (0x318a000) Data frame received for 3\nI1014 14:28:29.085665 1830 log.go:181] (0x309a000) (3) Data frame handling\nI1014 14:28:29.086992 1830 log.go:181] (0x318a000) (0x318a070) Stream removed, broadcasting: 1\nI1014 14:28:29.087624 1830 log.go:181] (0x318a000) Go away received\nI1014 14:28:29.090137 1830 log.go:181] (0x318a000) (0x318a070) Stream removed, broadcasting: 1\nI1014 14:28:29.090359 1830 log.go:181] (0x318a000) (0x309a000) Stream removed, broadcasting: 3\nI1014 14:28:29.090530 1830 log.go:181] (0x318a000) (0x309a230) Stream removed, broadcasting: 5\n" Oct 14 14:28:29.101: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 14:28:29.101: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 14:28:29.107: INFO: Found 1 stateful pods, waiting for 3 Oct 14 14:28:39.119: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 14:28:39.120: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 14:28:39.120: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 14 14:28:39.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 14:28:40.681: INFO: stderr: "I1014 14:28:40.580638 1851 log.go:181] (0x2f8e000) (0x2f8e070) Create stream\nI1014 14:28:40.585638 1851 log.go:181] (0x2f8e000) (0x2f8e070) Stream added, 
broadcasting: 1\nI1014 14:28:40.597020 1851 log.go:181] (0x2f8e000) Reply frame received for 1\nI1014 14:28:40.597996 1851 log.go:181] (0x2f8e000) (0x26d22a0) Create stream\nI1014 14:28:40.598152 1851 log.go:181] (0x2f8e000) (0x26d22a0) Stream added, broadcasting: 3\nI1014 14:28:40.600221 1851 log.go:181] (0x2f8e000) Reply frame received for 3\nI1014 14:28:40.600578 1851 log.go:181] (0x2f8e000) (0x2f8e2a0) Create stream\nI1014 14:28:40.600670 1851 log.go:181] (0x2f8e000) (0x2f8e2a0) Stream added, broadcasting: 5\nI1014 14:28:40.602333 1851 log.go:181] (0x2f8e000) Reply frame received for 5\nI1014 14:28:40.666656 1851 log.go:181] (0x2f8e000) Data frame received for 3\nI1014 14:28:40.666926 1851 log.go:181] (0x2f8e000) Data frame received for 1\nI1014 14:28:40.667022 1851 log.go:181] (0x26d22a0) (3) Data frame handling\nI1014 14:28:40.667188 1851 log.go:181] (0x2f8e000) Data frame received for 5\nI1014 14:28:40.667254 1851 log.go:181] (0x2f8e2a0) (5) Data frame handling\nI1014 14:28:40.667345 1851 log.go:181] (0x2f8e070) (1) Data frame handling\nI1014 14:28:40.667899 1851 log.go:181] (0x26d22a0) (3) Data frame sent\nI1014 14:28:40.668230 1851 log.go:181] (0x2f8e070) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 14:28:40.668403 1851 log.go:181] (0x2f8e2a0) (5) Data frame sent\nI1014 14:28:40.668625 1851 log.go:181] (0x2f8e000) Data frame received for 5\nI1014 14:28:40.668683 1851 log.go:181] (0x2f8e2a0) (5) Data frame handling\nI1014 14:28:40.669578 1851 log.go:181] (0x2f8e000) Data frame received for 3\nI1014 14:28:40.669839 1851 log.go:181] (0x2f8e000) (0x2f8e070) Stream removed, broadcasting: 1\nI1014 14:28:40.670729 1851 log.go:181] (0x26d22a0) (3) Data frame handling\nI1014 14:28:40.671226 1851 log.go:181] (0x2f8e000) Go away received\nI1014 14:28:40.673230 1851 log.go:181] (0x2f8e000) (0x2f8e070) Stream removed, broadcasting: 1\nI1014 14:28:40.673402 1851 log.go:181] (0x2f8e000) (0x26d22a0) Stream removed, broadcasting: 3\nI1014 
14:28:40.673536 1851 log.go:181] (0x2f8e000) (0x2f8e2a0) Stream removed, broadcasting: 5\n" Oct 14 14:28:40.682: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 14:28:40.682: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 14:28:40.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 14:28:42.356: INFO: stderr: "I1014 14:28:42.196333 1871 log.go:181] (0x29a0e70) (0x29a0ee0) Create stream\nI1014 14:28:42.200053 1871 log.go:181] (0x29a0e70) (0x29a0ee0) Stream added, broadcasting: 1\nI1014 14:28:42.209432 1871 log.go:181] (0x29a0e70) Reply frame received for 1\nI1014 14:28:42.210405 1871 log.go:181] (0x29a0e70) (0x2946070) Create stream\nI1014 14:28:42.210519 1871 log.go:181] (0x29a0e70) (0x2946070) Stream added, broadcasting: 3\nI1014 14:28:42.212366 1871 log.go:181] (0x29a0e70) Reply frame received for 3\nI1014 14:28:42.212830 1871 log.go:181] (0x29a0e70) (0x29a10a0) Create stream\nI1014 14:28:42.213051 1871 log.go:181] (0x29a0e70) (0x29a10a0) Stream added, broadcasting: 5\nI1014 14:28:42.214702 1871 log.go:181] (0x29a0e70) Reply frame received for 5\nI1014 14:28:42.302041 1871 log.go:181] (0x29a0e70) Data frame received for 5\nI1014 14:28:42.302243 1871 log.go:181] (0x29a10a0) (5) Data frame handling\nI1014 14:28:42.302546 1871 log.go:181] (0x29a10a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 14:28:42.338498 1871 log.go:181] (0x29a0e70) Data frame received for 3\nI1014 14:28:42.338738 1871 log.go:181] (0x2946070) (3) Data frame handling\nI1014 14:28:42.338911 1871 log.go:181] (0x29a0e70) Data frame received for 5\nI1014 14:28:42.339086 1871 log.go:181] (0x29a10a0) (5) Data frame handling\nI1014 14:28:42.339314 1871 
log.go:181] (0x2946070) (3) Data frame sent\nI1014 14:28:42.339473 1871 log.go:181] (0x29a0e70) Data frame received for 3\nI1014 14:28:42.339627 1871 log.go:181] (0x2946070) (3) Data frame handling\nI1014 14:28:42.340308 1871 log.go:181] (0x29a0e70) Data frame received for 1\nI1014 14:28:42.340472 1871 log.go:181] (0x29a0ee0) (1) Data frame handling\nI1014 14:28:42.340615 1871 log.go:181] (0x29a0ee0) (1) Data frame sent\nI1014 14:28:42.341473 1871 log.go:181] (0x29a0e70) (0x29a0ee0) Stream removed, broadcasting: 1\nI1014 14:28:42.344170 1871 log.go:181] (0x29a0e70) Go away received\nI1014 14:28:42.346432 1871 log.go:181] (0x29a0e70) (0x29a0ee0) Stream removed, broadcasting: 1\nI1014 14:28:42.347203 1871 log.go:181] (0x29a0e70) (0x2946070) Stream removed, broadcasting: 3\nI1014 14:28:42.347393 1871 log.go:181] (0x29a0e70) (0x29a10a0) Stream removed, broadcasting: 5\n" Oct 14 14:28:42.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 14:28:42.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 14:28:42.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 14:28:43.931: INFO: stderr: "I1014 14:28:43.769161 1891 log.go:181] (0x2c582a0) (0x2c58310) Create stream\nI1014 14:28:43.771460 1891 log.go:181] (0x2c582a0) (0x2c58310) Stream added, broadcasting: 1\nI1014 14:28:43.781615 1891 log.go:181] (0x2c582a0) Reply frame received for 1\nI1014 14:28:43.782302 1891 log.go:181] (0x2c582a0) (0x2e28070) Create stream\nI1014 14:28:43.782395 1891 log.go:181] (0x2c582a0) (0x2e28070) Stream added, broadcasting: 3\nI1014 14:28:43.785037 1891 log.go:181] (0x2c582a0) Reply frame received for 3\nI1014 14:28:43.785597 1891 log.go:181] (0x2c582a0) (0x2c58540) Create 
stream\nI1014 14:28:43.785722 1891 log.go:181] (0x2c582a0) (0x2c58540) Stream added, broadcasting: 5\nI1014 14:28:43.787349 1891 log.go:181] (0x2c582a0) Reply frame received for 5\nI1014 14:28:43.889515 1891 log.go:181] (0x2c582a0) Data frame received for 5\nI1014 14:28:43.889914 1891 log.go:181] (0x2c58540) (5) Data frame handling\nI1014 14:28:43.890763 1891 log.go:181] (0x2c58540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 14:28:43.914466 1891 log.go:181] (0x2c582a0) Data frame received for 3\nI1014 14:28:43.914764 1891 log.go:181] (0x2e28070) (3) Data frame handling\nI1014 14:28:43.914977 1891 log.go:181] (0x2c582a0) Data frame received for 5\nI1014 14:28:43.915222 1891 log.go:181] (0x2c58540) (5) Data frame handling\nI1014 14:28:43.915380 1891 log.go:181] (0x2e28070) (3) Data frame sent\nI1014 14:28:43.915559 1891 log.go:181] (0x2c582a0) Data frame received for 3\nI1014 14:28:43.915766 1891 log.go:181] (0x2e28070) (3) Data frame handling\nI1014 14:28:43.915949 1891 log.go:181] (0x2c582a0) Data frame received for 1\nI1014 14:28:43.916107 1891 log.go:181] (0x2c58310) (1) Data frame handling\nI1014 14:28:43.916258 1891 log.go:181] (0x2c58310) (1) Data frame sent\nI1014 14:28:43.918442 1891 log.go:181] (0x2c582a0) (0x2c58310) Stream removed, broadcasting: 1\nI1014 14:28:43.920252 1891 log.go:181] (0x2c582a0) Go away received\nI1014 14:28:43.923392 1891 log.go:181] (0x2c582a0) (0x2c58310) Stream removed, broadcasting: 1\nI1014 14:28:43.923554 1891 log.go:181] (0x2c582a0) (0x2e28070) Stream removed, broadcasting: 3\nI1014 14:28:43.923690 1891 log.go:181] (0x2c582a0) (0x2c58540) Stream removed, broadcasting: 5\n" Oct 14 14:28:43.933: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 14:28:43.933: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 14:28:43.933: INFO: Waiting for statefulset 
status.replicas updated to 0 Oct 14 14:28:43.939: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 14 14:28:53.957: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 14:28:53.957: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 14 14:28:53.958: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 14 14:28:53.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999987455s Oct 14 14:28:54.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989523363s Oct 14 14:28:56.001: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977520201s Oct 14 14:28:57.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967701173s Oct 14 14:28:58.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.952790226s Oct 14 14:28:59.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.942762039s Oct 14 14:29:00.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.921207988s Oct 14 14:29:01.070: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.909681218s Oct 14 14:29:02.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.898718533s Oct 14 14:29:03.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 886.240007ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6348 Oct 14 14:29:04.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:05.576: INFO: stderr: "I1014 14:29:05.445984 1911 log.go:181] (0x247e2a0) (0x247e310) Create stream\nI1014 14:29:05.447787 1911 log.go:181] (0x247e2a0) (0x247e310) Stream added, 
broadcasting: 1\nI1014 14:29:05.463480 1911 log.go:181] (0x247e2a0) Reply frame received for 1\nI1014 14:29:05.463920 1911 log.go:181] (0x247e2a0) (0x26440e0) Create stream\nI1014 14:29:05.463986 1911 log.go:181] (0x247e2a0) (0x26440e0) Stream added, broadcasting: 3\nI1014 14:29:05.465376 1911 log.go:181] (0x247e2a0) Reply frame received for 3\nI1014 14:29:05.465652 1911 log.go:181] (0x247e2a0) (0x247e1c0) Create stream\nI1014 14:29:05.465750 1911 log.go:181] (0x247e2a0) (0x247e1c0) Stream added, broadcasting: 5\nI1014 14:29:05.466828 1911 log.go:181] (0x247e2a0) Reply frame received for 5\nI1014 14:29:05.554922 1911 log.go:181] (0x247e2a0) Data frame received for 3\nI1014 14:29:05.555294 1911 log.go:181] (0x247e2a0) Data frame received for 5\nI1014 14:29:05.555469 1911 log.go:181] (0x247e1c0) (5) Data frame handling\nI1014 14:29:05.555569 1911 log.go:181] (0x26440e0) (3) Data frame handling\nI1014 14:29:05.555828 1911 log.go:181] (0x247e2a0) Data frame received for 1\nI1014 14:29:05.555983 1911 log.go:181] (0x247e310) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 14:29:05.557355 1911 log.go:181] (0x247e310) (1) Data frame sent\nI1014 14:29:05.557535 1911 log.go:181] (0x26440e0) (3) Data frame sent\nI1014 14:29:05.557713 1911 log.go:181] (0x247e1c0) (5) Data frame sent\nI1014 14:29:05.558841 1911 log.go:181] (0x247e2a0) Data frame received for 5\nI1014 14:29:05.558981 1911 log.go:181] (0x247e1c0) (5) Data frame handling\nI1014 14:29:05.559105 1911 log.go:181] (0x247e2a0) Data frame received for 3\nI1014 14:29:05.559830 1911 log.go:181] (0x247e2a0) (0x247e310) Stream removed, broadcasting: 1\nI1014 14:29:05.560491 1911 log.go:181] (0x26440e0) (3) Data frame handling\nI1014 14:29:05.561043 1911 log.go:181] (0x247e2a0) Go away received\nI1014 14:29:05.563613 1911 log.go:181] (0x247e2a0) (0x247e310) Stream removed, broadcasting: 1\nI1014 14:29:05.563879 1911 log.go:181] (0x247e2a0) (0x26440e0) Stream removed, broadcasting: 3\nI1014 
14:29:05.564093 1911 log.go:181] (0x247e2a0) (0x247e1c0) Stream removed, broadcasting: 5\n" Oct 14 14:29:05.577: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 14:29:05.577: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 14:29:05.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:07.090: INFO: stderr: "I1014 14:29:06.964138 1931 log.go:181] (0x32190a0) (0x3219110) Create stream\nI1014 14:29:06.968563 1931 log.go:181] (0x32190a0) (0x3219110) Stream added, broadcasting: 1\nI1014 14:29:06.984351 1931 log.go:181] (0x32190a0) Reply frame received for 1\nI1014 14:29:06.985078 1931 log.go:181] (0x32190a0) (0x28ce380) Create stream\nI1014 14:29:06.985165 1931 log.go:181] (0x32190a0) (0x28ce380) Stream added, broadcasting: 3\nI1014 14:29:06.986916 1931 log.go:181] (0x32190a0) Reply frame received for 3\nI1014 14:29:06.987385 1931 log.go:181] (0x32190a0) (0x28ce690) Create stream\nI1014 14:29:06.987515 1931 log.go:181] (0x32190a0) (0x28ce690) Stream added, broadcasting: 5\nI1014 14:29:06.989361 1931 log.go:181] (0x32190a0) Reply frame received for 5\nI1014 14:29:07.071981 1931 log.go:181] (0x32190a0) Data frame received for 3\nI1014 14:29:07.072414 1931 log.go:181] (0x28ce380) (3) Data frame handling\nI1014 14:29:07.072755 1931 log.go:181] (0x32190a0) Data frame received for 1\nI1014 14:29:07.073010 1931 log.go:181] (0x3219110) (1) Data frame handling\nI1014 14:29:07.073178 1931 log.go:181] (0x32190a0) Data frame received for 5\nI1014 14:29:07.073385 1931 log.go:181] (0x28ce690) (5) Data frame handling\nI1014 14:29:07.074015 1931 log.go:181] (0x28ce690) (5) Data frame sent\nI1014 14:29:07.074257 1931 log.go:181] (0x3219110) (1) Data frame sent\nI1014 
14:29:07.074398 1931 log.go:181] (0x28ce380) (3) Data frame sent\nI1014 14:29:07.074538 1931 log.go:181] (0x32190a0) Data frame received for 3\nI1014 14:29:07.074640 1931 log.go:181] (0x28ce380) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 14:29:07.076133 1931 log.go:181] (0x32190a0) Data frame received for 5\nI1014 14:29:07.076798 1931 log.go:181] (0x32190a0) (0x3219110) Stream removed, broadcasting: 1\nI1014 14:29:07.077586 1931 log.go:181] (0x28ce690) (5) Data frame handling\nI1014 14:29:07.080051 1931 log.go:181] (0x32190a0) Go away received\nI1014 14:29:07.081924 1931 log.go:181] (0x32190a0) (0x3219110) Stream removed, broadcasting: 1\nI1014 14:29:07.082330 1931 log.go:181] (0x32190a0) (0x28ce380) Stream removed, broadcasting: 3\nI1014 14:29:07.082739 1931 log.go:181] (0x32190a0) (0x28ce690) Stream removed, broadcasting: 5\n" Oct 14 14:29:07.091: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 14:29:07.091: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 14:29:07.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:08.611: INFO: rc: 1 Oct 14 14:29:08.612: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 14 14:29:18.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:19.799: INFO: rc: 1 Oct 14 14:29:19.799: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:29:29.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:31.107: INFO: rc: 1 Oct 14 14:29:31.108: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:29:41.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:29:42.366: INFO: rc: 1 Oct 14 14:29:42.367: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:29:52.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Oct 14 14:29:53.615: INFO: rc: 1 Oct 14 14:29:53.615: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:03.616: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:30:04.813: INFO: rc: 1 Oct 14 14:30:04.813: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:14.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:30:16.048: INFO: rc: 1 Oct 14 14:30:16.049: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:26.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Oct 14 14:30:27.368: INFO: rc: 1 Oct 14 14:30:27.368: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:37.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:30:38.629: INFO: rc: 1 Oct 14 14:30:38.629: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:48.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:30:49.817: INFO: rc: 1 Oct 14 14:30:49.818: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:30:59.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 
14:31:01.061: INFO: rc: 1 Oct 14 14:31:01.061: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:31:11.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:31:12.280: INFO: rc: 1 Oct 14 14:31:12.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:31:22.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:31:23.486: INFO: rc: 1 Oct 14 14:31:23.486: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:31:33.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:31:34.738: INFO: rc: 1 
Oct 14 14:31:34.739: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:31:44.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:31:45.989: INFO: rc: 1 Oct 14 14:31:45.990: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:31:55.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:31:57.227: INFO: rc: 1 Oct 14 14:31:57.228: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:32:07.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:32:08.517: INFO: rc: 1 Oct 14 14:32:08.517: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:32:18.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:32:19.764: INFO: rc: 1 Oct 14 14:32:19.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:32:29.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:32:31.021: INFO: rc: 1 Oct 14 14:32:31.021: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:32:41.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:32:42.197: INFO: rc: 1 Oct 14 14:32:42.198: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:32:52.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:32:53.475: INFO: rc: 1 Oct 14 14:32:53.475: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:03.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:33:04.711: INFO: rc: 1 Oct 14 14:33:04.712: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:14.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:33:16.059: INFO: rc: 1 Oct 14 14:33:16.059: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:26.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:33:27.310: INFO: rc: 1 Oct 14 14:33:27.310: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:37.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:33:38.529: INFO: rc: 1 Oct 14 14:33:38.530: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:48.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:33:49.779: INFO: rc: 1 Oct 14 14:33:49.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:33:59.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:34:01.007: INFO: rc: 1 Oct 14 14:34:01.007: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 14 14:34:11.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 14:34:12.265: INFO: rc: 1 Oct 14 14:34:12.265: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Oct 14 14:34:12.266: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 14:34:12.282: INFO: Deleting all statefulset in ns statefulset-6348 Oct 14 14:34:12.287: INFO: Scaling statefulset ss to 0 Oct 14 14:34:12.302: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 14:34:12.306: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:12.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6348" for this suite. • [SLOW TEST:377.110 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":127,"skipped":1834,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:12.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 14 14:34:12.441: INFO: Waiting up to 5m0s for pod "pod-1d5618df-9c5d-4945-9c29-26418412ed4b" in namespace "emptydir-1123" to be "Succeeded or Failed" Oct 14 14:34:12.462: INFO: Pod "pod-1d5618df-9c5d-4945-9c29-26418412ed4b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.010354ms Oct 14 14:34:14.469: INFO: Pod "pod-1d5618df-9c5d-4945-9c29-26418412ed4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027989665s Oct 14 14:34:16.476: INFO: Pod "pod-1d5618df-9c5d-4945-9c29-26418412ed4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035009249s STEP: Saw pod success Oct 14 14:34:16.477: INFO: Pod "pod-1d5618df-9c5d-4945-9c29-26418412ed4b" satisfied condition "Succeeded or Failed" Oct 14 14:34:16.483: INFO: Trying to get logs from node latest-worker pod pod-1d5618df-9c5d-4945-9c29-26418412ed4b container test-container: STEP: delete the pod Oct 14 14:34:16.594: INFO: Waiting for pod pod-1d5618df-9c5d-4945-9c29-26418412ed4b to disappear Oct 14 14:34:16.697: INFO: Pod pod-1d5618df-9c5d-4945-9c29-26418412ed4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:16.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1123" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":128,"skipped":1845,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:16.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:16.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7879" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":129,"skipped":1852,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:16.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e404492f-6e93-4bda-ba6b-ee884812cde1 STEP: Creating a pod to test consume configMaps Oct 14 14:34:17.100: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717" in namespace "projected-1157" to be "Succeeded or Failed" Oct 14 14:34:17.118: INFO: Pod "pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717": Phase="Pending", Reason="", readiness=false. Elapsed: 18.127793ms Oct 14 14:34:19.126: INFO: Pod "pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025955459s Oct 14 14:34:21.134: INFO: Pod "pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034353894s STEP: Saw pod success Oct 14 14:34:21.134: INFO: Pod "pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717" satisfied condition "Succeeded or Failed" Oct 14 14:34:21.140: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717 container projected-configmap-volume-test: STEP: delete the pod Oct 14 14:34:21.202: INFO: Waiting for pod pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717 to disappear Oct 14 14:34:21.212: INFO: Pod pod-projected-configmaps-7d3181e2-a330-4036-bdc6-4d0f589f0717 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:21.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1157" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":1869,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:21.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:34:21.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21" in namespace "projected-3822" to be "Succeeded or Failed" Oct 14 14:34:21.359: INFO: Pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21": Phase="Pending", Reason="", readiness=false. Elapsed: 18.904088ms Oct 14 14:34:23.369: INFO: Pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028779938s Oct 14 14:34:25.378: INFO: Pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21": Phase="Running", Reason="", readiness=true. Elapsed: 4.037777172s Oct 14 14:34:27.387: INFO: Pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045998237s STEP: Saw pod success Oct 14 14:34:27.387: INFO: Pod "downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21" satisfied condition "Succeeded or Failed" Oct 14 14:34:27.392: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21 container client-container: STEP: delete the pod Oct 14 14:34:27.459: INFO: Waiting for pod downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21 to disappear Oct 14 14:34:27.470: INFO: Pod downwardapi-volume-0a3bce5b-c919-498d-9f5b-eb1838126d21 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:27.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3822" for this suite. 
• [SLOW TEST:6.261 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":1872,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:27.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8227/configmap-test-04b61922-c72d-4b68-a6d2-4fafa062bc20 STEP: Creating a pod to test consume configMaps Oct 14 14:34:27.557: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f" in namespace "configmap-8227" to be "Succeeded or Failed" Oct 14 
14:34:27.596: INFO: Pod "pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.760119ms Oct 14 14:34:29.604: INFO: Pod "pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04671817s Oct 14 14:34:31.612: INFO: Pod "pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054344602s STEP: Saw pod success Oct 14 14:34:31.612: INFO: Pod "pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f" satisfied condition "Succeeded or Failed" Oct 14 14:34:31.618: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f container env-test: STEP: delete the pod Oct 14 14:34:31.772: INFO: Waiting for pod pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f to disappear Oct 14 14:34:31.866: INFO: Pod pod-configmaps-c6a80edd-5974-4d35-8537-ef5f545e598f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:34:31.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8227" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":1879,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:34:31.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:36:32.053: INFO: Deleting pod "var-expansion-1cd37bac-52ab-4ccd-b995-135c31b7f3bf" in namespace "var-expansion-8039" Oct 14 14:36:32.059: INFO: Wait up to 5m0s for pod "var-expansion-1cd37bac-52ab-4ccd-b995-135c31b7f3bf" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:36:36.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8039" for this suite. 
• [SLOW TEST:124.205 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":133,"skipped":1901,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:36:36.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-8nt9 STEP: Creating a pod to test atomic-volume-subpath 
Oct 14 14:36:36.238: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8nt9" in namespace "subpath-4036" to be "Succeeded or Failed" Oct 14 14:36:36.293: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Pending", Reason="", readiness=false. Elapsed: 55.039905ms Oct 14 14:36:38.300: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062606572s Oct 14 14:36:40.309: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.071613996s Oct 14 14:36:42.318: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.080644193s Oct 14 14:36:44.326: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.088705969s Oct 14 14:36:46.335: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.097349724s Oct 14 14:36:48.343: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.105666509s Oct 14 14:36:50.353: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 14.115024173s Oct 14 14:36:52.361: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.12367858s Oct 14 14:36:54.370: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.132635314s Oct 14 14:36:56.379: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.141036226s Oct 14 14:36:58.388: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.149924275s Oct 14 14:37:00.397: INFO: Pod "pod-subpath-test-configmap-8nt9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.159164786s STEP: Saw pod success Oct 14 14:37:00.397: INFO: Pod "pod-subpath-test-configmap-8nt9" satisfied condition "Succeeded or Failed" Oct 14 14:37:00.403: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-8nt9 container test-container-subpath-configmap-8nt9: STEP: delete the pod Oct 14 14:37:00.440: INFO: Waiting for pod pod-subpath-test-configmap-8nt9 to disappear Oct 14 14:37:00.465: INFO: Pod pod-subpath-test-configmap-8nt9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-8nt9 Oct 14 14:37:00.465: INFO: Deleting pod "pod-subpath-test-configmap-8nt9" in namespace "subpath-4036" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:37:00.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4036" for this suite. • [SLOW TEST:24.386 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":134,"skipped":1902,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:37:00.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1014 14:37:10.714976 11 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 14:38:12.742: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:38:12.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6558" for this suite. 
• [SLOW TEST:72.263 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":135,"skipped":1910,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:38:12.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:38:44.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1597" for this suite. STEP: Destroying namespace "nsdeletetest-8410" for this suite. Oct 14 14:38:44.164: INFO: Namespace nsdeletetest-8410 was already deleted STEP: Destroying namespace "nsdeletetest-2466" for this suite. • [SLOW TEST:31.409 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":136,"skipped":1915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:38:44.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7931 Oct 14 14:38:48.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 14 14:38:52.875: INFO: stderr: "I1014 14:38:52.730346 2515 log.go:181] (0x27a70a0) (0x27a7110) Create stream\nI1014 14:38:52.732618 2515 log.go:181] (0x27a70a0) (0x27a7110) Stream added, broadcasting: 1\nI1014 14:38:52.744690 2515 log.go:181] (0x27a70a0) Reply frame received for 1\nI1014 14:38:52.745427 2515 log.go:181] (0x27a70a0) (0x27a73b0) Create stream\nI1014 14:38:52.745521 2515 log.go:181] (0x27a70a0) (0x27a73b0) Stream added, broadcasting: 3\nI1014 14:38:52.747119 2515 log.go:181] (0x27a70a0) Reply frame received for 3\nI1014 14:38:52.747435 2515 log.go:181] (0x27a70a0) (0x27a75e0) Create stream\nI1014 14:38:52.747514 2515 log.go:181] (0x27a70a0) (0x27a75e0) Stream added, broadcasting: 5\nI1014 14:38:52.749028 2515 log.go:181] (0x27a70a0) Reply frame received for 5\nI1014 14:38:52.853223 2515 log.go:181] (0x27a70a0) Data frame received for 5\nI1014 14:38:52.853513 2515 log.go:181] (0x27a75e0) (5) Data frame handling\nI1014 14:38:52.854158 2515 log.go:181] (0x27a75e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1014 14:38:52.857135 2515 log.go:181] (0x27a70a0) Data frame received for 3\nI1014 
14:38:52.857337 2515 log.go:181] (0x27a73b0) (3) Data frame handling\nI1014 14:38:52.857504 2515 log.go:181] (0x27a73b0) (3) Data frame sent\nI1014 14:38:52.857651 2515 log.go:181] (0x27a70a0) Data frame received for 5\nI1014 14:38:52.857738 2515 log.go:181] (0x27a75e0) (5) Data frame handling\nI1014 14:38:52.857905 2515 log.go:181] (0x27a70a0) Data frame received for 3\nI1014 14:38:52.858023 2515 log.go:181] (0x27a73b0) (3) Data frame handling\nI1014 14:38:52.859817 2515 log.go:181] (0x27a70a0) Data frame received for 1\nI1014 14:38:52.859938 2515 log.go:181] (0x27a7110) (1) Data frame handling\nI1014 14:38:52.860074 2515 log.go:181] (0x27a7110) (1) Data frame sent\nI1014 14:38:52.860646 2515 log.go:181] (0x27a70a0) (0x27a7110) Stream removed, broadcasting: 1\nI1014 14:38:52.863246 2515 log.go:181] (0x27a70a0) Go away received\nI1014 14:38:52.865434 2515 log.go:181] (0x27a70a0) (0x27a7110) Stream removed, broadcasting: 1\nI1014 14:38:52.865861 2515 log.go:181] (0x27a70a0) (0x27a73b0) Stream removed, broadcasting: 3\nI1014 14:38:52.866145 2515 log.go:181] (0x27a70a0) (0x27a75e0) Stream removed, broadcasting: 5\n" Oct 14 14:38:52.876: INFO: stdout: "iptables" Oct 14 14:38:52.877: INFO: proxyMode: iptables Oct 14 14:38:52.886: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:38:52.972: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:38:54.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:38:54.979: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:38:56.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:38:56.980: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:38:58.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:38:58.979: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:39:00.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:39:00.979: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:39:02.973: 
INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:39:02.980: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:39:04.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:39:04.981: INFO: Pod kube-proxy-mode-detector still exists Oct 14 14:39:06.973: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 14:39:06.979: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7931 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7931 I1014 14:39:07.091848 11 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7931, replica count: 3 I1014 14:39:10.143366 11 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:39:13.144332 11 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 14:39:13.193: INFO: Creating new exec pod Oct 14 14:39:18.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 execpod-affinitydmtqx -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Oct 14 14:39:19.717: INFO: stderr: "I1014 14:39:19.578880 2536 log.go:181] (0x2eac150) (0x2eac1c0) Create stream\nI1014 14:39:19.581932 2536 log.go:181] (0x2eac150) (0x2eac1c0) Stream added, broadcasting: 1\nI1014 14:39:19.593555 2536 log.go:181] (0x2eac150) Reply frame received for 1\nI1014 14:39:19.594277 2536 log.go:181] (0x2eac150) (0x2eac380) Create stream\nI1014 14:39:19.594528 2536 log.go:181] (0x2eac150) (0x2eac380) Stream added, broadcasting: 3\nI1014 14:39:19.596486 2536 log.go:181] (0x2eac150) Reply frame received for 3\nI1014 14:39:19.596941 2536 log.go:181] 
(0x2eac150) (0x2683730) Create stream\nI1014 14:39:19.597058 2536 log.go:181] (0x2eac150) (0x2683730) Stream added, broadcasting: 5\nI1014 14:39:19.598647 2536 log.go:181] (0x2eac150) Reply frame received for 5\nI1014 14:39:19.690337 2536 log.go:181] (0x2eac150) Data frame received for 5\nI1014 14:39:19.690613 2536 log.go:181] (0x2683730) (5) Data frame handling\nI1014 14:39:19.691029 2536 log.go:181] (0x2eac150) Data frame received for 3\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI1014 14:39:19.691364 2536 log.go:181] (0x2eac380) (3) Data frame handling\nI1014 14:39:19.691591 2536 log.go:181] (0x2683730) (5) Data frame sent\nI1014 14:39:19.692287 2536 log.go:181] (0x2eac150) Data frame received for 5\nI1014 14:39:19.692390 2536 log.go:181] (0x2683730) (5) Data frame handling\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1014 14:39:19.692497 2536 log.go:181] (0x2eac150) Data frame received for 1\nI1014 14:39:19.692625 2536 log.go:181] (0x2eac1c0) (1) Data frame handling\nI1014 14:39:19.692700 2536 log.go:181] (0x2683730) (5) Data frame sent\nI1014 14:39:19.692773 2536 log.go:181] (0x2eac1c0) (1) Data frame sent\nI1014 14:39:19.693024 2536 log.go:181] (0x2eac150) Data frame received for 5\nI1014 14:39:19.693133 2536 log.go:181] (0x2683730) (5) Data frame handling\nI1014 14:39:19.693746 2536 log.go:181] (0x2eac150) (0x2eac1c0) Stream removed, broadcasting: 1\nI1014 14:39:19.696269 2536 log.go:181] (0x2eac150) Go away received\nI1014 14:39:19.707554 2536 log.go:181] (0x2eac150) (0x2eac1c0) Stream removed, broadcasting: 1\nI1014 14:39:19.708281 2536 log.go:181] (0x2eac150) (0x2eac380) Stream removed, broadcasting: 3\nI1014 14:39:19.708624 2536 log.go:181] (0x2eac150) (0x2683730) Stream removed, broadcasting: 5\n" Oct 14 14:39:19.717: INFO: stdout: "" Oct 14 14:39:19.720: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 execpod-affinitydmtqx -- 
/bin/sh -x -c nc -zv -t -w 2 10.111.200.126 80' Oct 14 14:39:21.331: INFO: stderr: "I1014 14:39:21.207876 2556 log.go:181] (0x2f2c310) (0x2f2c380) Create stream\nI1014 14:39:21.210663 2556 log.go:181] (0x2f2c310) (0x2f2c380) Stream added, broadcasting: 1\nI1014 14:39:21.222480 2556 log.go:181] (0x2f2c310) Reply frame received for 1\nI1014 14:39:21.223816 2556 log.go:181] (0x2f2c310) (0x29e8070) Create stream\nI1014 14:39:21.224009 2556 log.go:181] (0x2f2c310) (0x29e8070) Stream added, broadcasting: 3\nI1014 14:39:21.226213 2556 log.go:181] (0x2f2c310) Reply frame received for 3\nI1014 14:39:21.226697 2556 log.go:181] (0x2f2c310) (0x2f2c5b0) Create stream\nI1014 14:39:21.226822 2556 log.go:181] (0x2f2c310) (0x2f2c5b0) Stream added, broadcasting: 5\nI1014 14:39:21.229040 2556 log.go:181] (0x2f2c310) Reply frame received for 5\nI1014 14:39:21.314151 2556 log.go:181] (0x2f2c310) Data frame received for 3\nI1014 14:39:21.314491 2556 log.go:181] (0x29e8070) (3) Data frame handling\nI1014 14:39:21.314774 2556 log.go:181] (0x2f2c310) Data frame received for 5\nI1014 14:39:21.315017 2556 log.go:181] (0x2f2c5b0) (5) Data frame handling\nI1014 14:39:21.315215 2556 log.go:181] (0x2f2c310) Data frame received for 1\nI1014 14:39:21.315369 2556 log.go:181] (0x2f2c380) (1) Data frame handling\nI1014 14:39:21.315947 2556 log.go:181] (0x2f2c380) (1) Data frame sent\nI1014 14:39:21.316431 2556 log.go:181] (0x2f2c5b0) (5) Data frame sent\nI1014 14:39:21.316532 2556 log.go:181] (0x2f2c310) Data frame received for 5\nI1014 14:39:21.316631 2556 log.go:181] (0x2f2c5b0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.200.126 80\nConnection to 10.111.200.126 80 port [tcp/http] succeeded!\nI1014 14:39:21.318838 2556 log.go:181] (0x2f2c310) (0x2f2c380) Stream removed, broadcasting: 1\nI1014 14:39:21.320782 2556 log.go:181] (0x2f2c310) Go away received\nI1014 14:39:21.322884 2556 log.go:181] (0x2f2c310) (0x2f2c380) Stream removed, broadcasting: 1\nI1014 14:39:21.323212 2556 log.go:181] 
(0x2f2c310) (0x29e8070) Stream removed, broadcasting: 3\nI1014 14:39:21.323442 2556 log.go:181] (0x2f2c310) (0x2f2c5b0) Stream removed, broadcasting: 5\n" Oct 14 14:39:21.333: INFO: stdout: "" Oct 14 14:39:21.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 execpod-affinitydmtqx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.200.126:80/ ; done' Oct 14 14:39:22.956: INFO: stderr: "I1014 14:39:22.750927 2577 log.go:181] (0x25ee000) (0x25ee070) Create stream\nI1014 14:39:22.755638 2577 log.go:181] (0x25ee000) (0x25ee070) Stream added, broadcasting: 1\nI1014 14:39:22.766259 2577 log.go:181] (0x25ee000) Reply frame received for 1\nI1014 14:39:22.766690 2577 log.go:181] (0x25ee000) (0x2eac770) Create stream\nI1014 14:39:22.766755 2577 log.go:181] (0x25ee000) (0x2eac770) Stream added, broadcasting: 3\nI1014 14:39:22.768144 2577 log.go:181] (0x25ee000) Reply frame received for 3\nI1014 14:39:22.768357 2577 log.go:181] (0x25ee000) (0x30281c0) Create stream\nI1014 14:39:22.768415 2577 log.go:181] (0x25ee000) (0x30281c0) Stream added, broadcasting: 5\nI1014 14:39:22.769704 2577 log.go:181] (0x25ee000) Reply frame received for 5\nI1014 14:39:22.848259 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.849037 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.849841 2577 log.go:181] (0x25ee000) Data frame received for 3\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.850220 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.850433 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.850663 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.851524 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.851641 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.851774 2577 log.go:181] 
(0x2eac770) (3) Data frame sent\nI1014 14:39:22.852239 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.852336 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.852434 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.852515 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.852613 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.852725 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.856814 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.857062 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.857218 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.857372 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.857516 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.857636 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.857830 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.858027 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.858187 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.861287 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.861405 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.861557 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.861757 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.861867 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.861953 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.862040 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.862122 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.862205 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.865936 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.866070 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.866244 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.866719 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.866860 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.866988 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.867121 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.867261 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.867396 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.871018 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.871117 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.871224 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.871619 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.871727 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.871832 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.871979 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.872140 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.872236 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.875940 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.876079 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.876214 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.876339 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.876434 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.876536 2577 log.go:181] (0x25ee000) Data frame received for 
5\nI1014 14:39:22.876662 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.876767 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.876942 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.885777 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.885869 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.885949 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.886069 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.886142 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.886219 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.886283 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.886335 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.886412 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.889982 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.890146 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.890334 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.893451 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.893521 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.893622 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.893700 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.893776 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.893930 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.895056 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.895147 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.895238 2577 log.go:181] (0x2eac770) (3) Data 
frame sent\nI1014 14:39:22.895803 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.895880 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.895970 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.896044 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.896105 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.896175 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\nI1014 14:39:22.896241 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.896297 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.896383 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.900594 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.900675 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.900779 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.901417 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.901498 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.901568 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.901634 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.901695 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.901771 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.904456 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.904542 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.904623 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.905419 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.905532 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1014 14:39:22.905661 2577 log.go:181] (0x25ee000) Data frame received for 
3\nI1014 14:39:22.905788 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.905901 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.906009 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.906090 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.906215 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.906342 2577 log.go:181] (0x30281c0) (5) Data frame sent\n http://10.111.200.126:80/\nI1014 14:39:22.912020 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.912185 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.912370 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.912575 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.912708 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.912804 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.913027 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.913159 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.913267 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.918519 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.918681 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.918854 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.919054 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.919259 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.919387 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.919490 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.919583 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.919754 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.923788 2577 
log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.923939 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.924066 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.928763 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.929028 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.929119 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.929241 2577 log.go:181] (0x30281c0) (5) Data frame handling\nI1014 14:39:22.929336 2577 log.go:181] (0x30281c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.929408 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.931660 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.931774 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.931912 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.932332 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.932444 2577 log.go:181] (0x30281c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:22.932519 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.932599 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.932678 2577 log.go:181] (0x30281c0) (5) Data frame sent\nI1014 14:39:22.932777 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.937652 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.937776 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.937918 2577 log.go:181] (0x2eac770) (3) Data frame sent\nI1014 14:39:22.938788 2577 log.go:181] (0x25ee000) Data frame received for 3\nI1014 14:39:22.938925 2577 log.go:181] (0x2eac770) (3) Data frame handling\nI1014 14:39:22.939079 2577 log.go:181] (0x25ee000) Data frame received for 5\nI1014 14:39:22.939185 2577 log.go:181] (0x30281c0) (5) Data frame 
handling\nI1014 14:39:22.941633 2577 log.go:181] (0x25ee000) Data frame received for 1\nI1014 14:39:22.941781 2577 log.go:181] (0x25ee070) (1) Data frame handling\nI1014 14:39:22.941910 2577 log.go:181] (0x25ee070) (1) Data frame sent\nI1014 14:39:22.942582 2577 log.go:181] (0x25ee000) (0x25ee070) Stream removed, broadcasting: 1\nI1014 14:39:22.945241 2577 log.go:181] (0x25ee000) Go away received\nI1014 14:39:22.947458 2577 log.go:181] (0x25ee000) (0x25ee070) Stream removed, broadcasting: 1\nI1014 14:39:22.947728 2577 log.go:181] (0x25ee000) (0x2eac770) Stream removed, broadcasting: 3\nI1014 14:39:22.947946 2577 log.go:181] (0x25ee000) (0x30281c0) Stream removed, broadcasting: 5\n" Oct 14 14:39:22.962: INFO: stdout: "\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n\naffinity-clusterip-timeout-xdv8n" Oct 14 14:39:22.962: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.962: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.962: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.962: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 
14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Received response from host: affinity-clusterip-timeout-xdv8n Oct 14 14:39:22.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 execpod-affinitydmtqx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.200.126:80/' Oct 14 14:39:24.409: INFO: stderr: "I1014 14:39:24.300820 2597 log.go:181] (0x26d6000) (0x26d64d0) Create stream\nI1014 14:39:24.303605 2597 log.go:181] (0x26d6000) (0x26d64d0) Stream added, broadcasting: 1\nI1014 14:39:24.310754 2597 log.go:181] (0x26d6000) Reply frame received for 1\nI1014 14:39:24.311185 2597 log.go:181] (0x26d6000) (0x2802460) Create stream\nI1014 14:39:24.311243 2597 log.go:181] (0x26d6000) (0x2802460) Stream added, broadcasting: 3\nI1014 14:39:24.312585 2597 log.go:181] (0x26d6000) Reply frame received for 3\nI1014 14:39:24.312990 2597 log.go:181] (0x26d6000) (0x2ccc070) Create stream\nI1014 14:39:24.313084 2597 log.go:181] (0x26d6000) (0x2ccc070) Stream added, broadcasting: 5\nI1014 14:39:24.314358 2597 log.go:181] (0x26d6000) Reply frame received for 5\nI1014 14:39:24.370117 2597 log.go:181] (0x26d6000) Data frame received for 5\nI1014 14:39:24.370331 2597 log.go:181] (0x2ccc070) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:24.374104 2597 
log.go:181] (0x2ccc070) (5) Data frame sent\nI1014 14:39:24.389993 2597 log.go:181] (0x26d6000) Data frame received for 3\nI1014 14:39:24.390432 2597 log.go:181] (0x26d6000) Data frame received for 5\nI1014 14:39:24.390565 2597 log.go:181] (0x2802460) (3) Data frame handling\nI1014 14:39:24.391951 2597 log.go:181] (0x2ccc070) (5) Data frame handling\nI1014 14:39:24.394664 2597 log.go:181] (0x2802460) (3) Data frame sent\nI1014 14:39:24.396956 2597 log.go:181] (0x26d6000) Data frame received for 3\nI1014 14:39:24.397059 2597 log.go:181] (0x2802460) (3) Data frame handling\nI1014 14:39:24.397152 2597 log.go:181] (0x26d6000) Data frame received for 1\nI1014 14:39:24.397246 2597 log.go:181] (0x26d64d0) (1) Data frame handling\nI1014 14:39:24.397310 2597 log.go:181] (0x26d64d0) (1) Data frame sent\nI1014 14:39:24.397840 2597 log.go:181] (0x26d6000) (0x26d64d0) Stream removed, broadcasting: 1\nI1014 14:39:24.399163 2597 log.go:181] (0x26d6000) Go away received\nI1014 14:39:24.401483 2597 log.go:181] (0x26d6000) (0x26d64d0) Stream removed, broadcasting: 1\nI1014 14:39:24.401621 2597 log.go:181] (0x26d6000) (0x2802460) Stream removed, broadcasting: 3\nI1014 14:39:24.401729 2597 log.go:181] (0x26d6000) (0x2ccc070) Stream removed, broadcasting: 5\n" Oct 14 14:39:24.409: INFO: stdout: "affinity-clusterip-timeout-xdv8n" Oct 14 14:39:39.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7931 execpod-affinitydmtqx -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.200.126:80/' Oct 14 14:39:40.909: INFO: stderr: "I1014 14:39:40.779714 2617 log.go:181] (0x247aa80) (0x247ad90) Create stream\nI1014 14:39:40.782956 2617 log.go:181] (0x247aa80) (0x247ad90) Stream added, broadcasting: 1\nI1014 14:39:40.811210 2617 log.go:181] (0x247aa80) Reply frame received for 1\nI1014 14:39:40.812739 2617 log.go:181] (0x247aa80) (0x2a3a070) Create stream\nI1014 14:39:40.813026 2617 log.go:181] 
(0x247aa80) (0x2a3a070) Stream added, broadcasting: 3\nI1014 14:39:40.818753 2617 log.go:181] (0x247aa80) Reply frame received for 3\nI1014 14:39:40.820565 2617 log.go:181] (0x247aa80) (0x30b0000) Create stream\nI1014 14:39:40.820677 2617 log.go:181] (0x247aa80) (0x30b0000) Stream added, broadcasting: 5\nI1014 14:39:40.822297 2617 log.go:181] (0x247aa80) Reply frame received for 5\nI1014 14:39:40.887778 2617 log.go:181] (0x247aa80) Data frame received for 5\nI1014 14:39:40.888069 2617 log.go:181] (0x30b0000) (5) Data frame handling\nI1014 14:39:40.888639 2617 log.go:181] (0x30b0000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.111.200.126:80/\nI1014 14:39:40.890838 2617 log.go:181] (0x247aa80) Data frame received for 3\nI1014 14:39:40.890971 2617 log.go:181] (0x2a3a070) (3) Data frame handling\nI1014 14:39:40.891138 2617 log.go:181] (0x2a3a070) (3) Data frame sent\nI1014 14:39:40.891447 2617 log.go:181] (0x247aa80) Data frame received for 3\nI1014 14:39:40.891612 2617 log.go:181] (0x2a3a070) (3) Data frame handling\nI1014 14:39:40.891843 2617 log.go:181] (0x247aa80) Data frame received for 5\nI1014 14:39:40.891947 2617 log.go:181] (0x30b0000) (5) Data frame handling\nI1014 14:39:40.893319 2617 log.go:181] (0x247aa80) Data frame received for 1\nI1014 14:39:40.893408 2617 log.go:181] (0x247ad90) (1) Data frame handling\nI1014 14:39:40.893518 2617 log.go:181] (0x247ad90) (1) Data frame sent\nI1014 14:39:40.894071 2617 log.go:181] (0x247aa80) (0x247ad90) Stream removed, broadcasting: 1\nI1014 14:39:40.896829 2617 log.go:181] (0x247aa80) Go away received\nI1014 14:39:40.899121 2617 log.go:181] (0x247aa80) (0x247ad90) Stream removed, broadcasting: 1\nI1014 14:39:40.899572 2617 log.go:181] (0x247aa80) (0x2a3a070) Stream removed, broadcasting: 3\nI1014 14:39:40.900025 2617 log.go:181] (0x247aa80) (0x30b0000) Stream removed, broadcasting: 5\n" Oct 14 14:39:40.909: INFO: stdout: "affinity-clusterip-timeout-cdmt5" Oct 14 14:39:40.910: INFO: Cleaning up the 
exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7931, will wait for the garbage collector to delete the pods Oct 14 14:39:41.279: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 213.057462ms Oct 14 14:39:41.680: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 401.013705ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:39:55.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7931" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:71.735 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":137,"skipped":1955,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:39:55.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 14 14:39:56.040: INFO: Waiting up to 5m0s for pod "downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8" in namespace "downward-api-8095" to be "Succeeded or Failed" Oct 14 14:39:56.058: INFO: Pod "downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.621217ms Oct 14 14:39:58.066: INFO: Pod "downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025656382s Oct 14 14:40:00.074: INFO: Pod "downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034002363s STEP: Saw pod success Oct 14 14:40:00.075: INFO: Pod "downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8" satisfied condition "Succeeded or Failed" Oct 14 14:40:00.080: INFO: Trying to get logs from node latest-worker pod downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8 container dapi-container: STEP: delete the pod Oct 14 14:40:00.130: INFO: Waiting for pod downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8 to disappear Oct 14 14:40:00.296: INFO: Pod downward-api-914b5bce-0632-4e10-820c-dffbfd7a57d8 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:40:00.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8095" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":1976,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:40:00.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] 
RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:40:00.576: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 14 14:40:00.656: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 14 14:40:05.664: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 14 14:40:05.665: INFO: Creating deployment "test-rolling-update-deployment" Oct 14 14:40:05.674: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 14 14:40:05.732: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 14 14:40:07.848: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 14 14:40:07.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283205, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283205, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283205, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283205, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 14:40:09.861: INFO: Ensuring deployment 
"test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 14:40:09.879: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1236 /apis/apps/v1/namespaces/deployment-1236/deployments/test-rolling-update-deployment fff6dd6d-adb6-4510-815c-9d4eac1c96bb 1144646 1 2020-10-14 14:40:05 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-10-14 14:40:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 14:40:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa79d1e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-14 14:40:05 +0000 
UTC,LastTransitionTime:2020-10-14 14:40:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-10-14 14:40:09 +0000 UTC,LastTransitionTime:2020-10-14 14:40:05 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 14 14:40:09.890: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-1236 /apis/apps/v1/namespaces/deployment-1236/replicasets/test-rolling-update-deployment-c4cb8d6d9 d6693cf8-bef8-43b9-af20-e5d832857958 1144635 1 2020-10-14 14:40:05 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment fff6dd6d-adb6-4510-815c-9d4eac1c96bb 0xa79d6f0 0xa79d6f1}] [] [{kube-controller-manager Update apps/v1 2020-10-14 14:40:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fff6dd6d-adb6-4510-815c-9d4eac1c96bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xa79d768 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 14 14:40:09.891: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 14 14:40:09.892: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1236 /apis/apps/v1/namespaces/deployment-1236/replicasets/test-rolling-update-controller 70fd6745-456a-42cf-a7b9-6c5f1dbdf56a 1144645 2 2020-10-14 14:40:00 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment fff6dd6d-adb6-4510-815c-9d4eac1c96bb 0xa79d5e7 0xa79d5e8}] [] [{e2e.test Update apps/v1 2020-10-14 14:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 14:40:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fff6dd6d-adb6-4510-815c-9d4eac1c96bb\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xa79d688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 14:40:09.901: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-l5twn" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-l5twn test-rolling-update-deployment-c4cb8d6d9- deployment-1236 /api/v1/namespaces/deployment-1236/pods/test-rolling-update-deployment-c4cb8d6d9-l5twn 1760f932-981a-4d39-b3cd-1620cef76f3c 1144634 0 2020-10-14 14:40:05 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 d6693cf8-bef8-43b9-af20-e5d832857958 0xa79dbe0 0xa79dbe1}] [] [{kube-controller-manager Update v1 2020-10-14 14:40:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d6693cf8-bef8-43b9-af20-e5d832857958\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 14:40:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bw6qr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bw6qr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bw6qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{}
,SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:40:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:40:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:40:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 14:40:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.58,StartTime:2020-10-14 14:40:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 14:40:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://51a83f260dc78fad6f060969a1896c8602b4df82bb21ca45c01db11170327476,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:40:09.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1236" for this suite. 
• [SLOW TEST:9.530 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":139,"skipped":2022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:40:09.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-qpmmf in namespace proxy-839 I1014 14:40:10.244371 11 runners.go:190] Created replication controller with name: proxy-service-qpmmf, namespace: proxy-839, replica count: 1 I1014 14:40:11.295932 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1014 14:40:12.296738 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:40:13.297954 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:14.298815 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:15.299898 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:16.300754 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:17.301573 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:18.302481 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:19.303255 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:20.304148 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:21.305104 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1014 14:40:22.305983 11 runners.go:190] proxy-service-qpmmf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 14:40:22.317: INFO: 
setup took 12.124453632s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 14 14:40:22.330: INFO: (0) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 11.455492ms) Oct 14 14:40:22.330: INFO: (0) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... (200; 11.973296ms) Oct 14 14:40:22.330: INFO: (0) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 12.037401ms) Oct 14 14:40:22.331: INFO: (0) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 16.163325ms) Oct 14 14:40:22.335: INFO: (0) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 17.404738ms) Oct 14 14:40:22.336: INFO: (0) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 16.431077ms) Oct 14 14:40:22.337: INFO: (0) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname1/proxy/: tls baz (200; 18.803095ms) Oct 14 14:40:22.337: INFO: (0) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 17.869671ms) Oct 14 14:40:22.342: INFO: (1) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 4.905529ms) Oct 14 14:40:22.342: INFO: (1) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 5.182924ms) Oct 14 14:40:22.343: INFO: (1) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 4.95729ms) Oct 14 14:40:22.343: INFO: (1) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.442279ms) Oct 14 14:40:22.343: INFO: (1) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 5.705717ms) Oct 14 14:40:22.343: INFO: (1) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.859272ms) Oct 14 14:40:22.343: INFO: 
(1) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 5.775538ms) Oct 14 14:40:22.343: INFO: (1) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... (200; 5.994916ms) Oct 14 14:40:22.344: INFO: (1) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname1/proxy/: tls baz (200; 7.049785ms) Oct 14 14:40:22.344: INFO: (1) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 6.933567ms) Oct 14 14:40:22.345: INFO: (1) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 6.748003ms) Oct 14 14:40:22.344: INFO: (1) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 5.939448ms) Oct 14 14:40:22.345: INFO: (1) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 7.186061ms) Oct 14 14:40:22.345: INFO: (1) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 6.94387ms) Oct 14 14:40:22.345: INFO: (1) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 4.761552ms) Oct 14 14:40:22.351: INFO: (2) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.483245ms) Oct 14 14:40:22.351: INFO: (2) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 5.805721ms) Oct 14 14:40:22.351: INFO: (2) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: t... (200; 6.153529ms) Oct 14 14:40:22.352: INFO: (2) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 6.44579ms) Oct 14 14:40:22.352: INFO: (2) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... 
(200; 30.96559ms) Oct 14 14:40:22.387: INFO: (3) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 32.687082ms) Oct 14 14:40:22.390: INFO: (3) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 34.870395ms) Oct 14 14:40:22.391: INFO: (3) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 36.102067ms) Oct 14 14:40:22.391: INFO: (3) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname1/proxy/: tls baz (200; 35.890954ms) Oct 14 14:40:22.391: INFO: (3) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 36.474647ms) Oct 14 14:40:22.392: INFO: (3) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 37.219442ms) Oct 14 14:40:22.392: INFO: (3) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 7.959219ms) Oct 14 14:40:22.401: INFO: (4) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 8.322963ms) Oct 14 14:40:22.401: INFO: (4) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 8.380854ms) Oct 14 14:40:22.401: INFO: (4) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 7.902905ms) Oct 14 14:40:22.401: INFO: (4) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname1/proxy/: tls baz (200; 8.398436ms) Oct 14 14:40:22.402: INFO: (4) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... (200; 8.720449ms) Oct 14 14:40:22.406: INFO: (5) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... 
(200; 3.920626ms) Oct 14 14:40:22.407: INFO: (5) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 3.810289ms) Oct 14 14:40:22.407: INFO: (5) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 3.822124ms) Oct 14 14:40:22.407: INFO: (5) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 4.452482ms) Oct 14 14:40:22.408: INFO: (5) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 5.645636ms) Oct 14 14:40:22.410: INFO: (5) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 5.857227ms) Oct 14 14:40:22.410: INFO: (5) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: testtest (200; 4.839489ms) Oct 14 14:40:22.416: INFO: (6) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.572745ms) Oct 14 14:40:22.417: INFO: (6) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 6.585552ms) Oct 14 14:40:22.417: INFO: (6) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 6.549535ms) Oct 14 14:40:22.417: INFO: (6) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 6.594475ms) Oct 14 14:40:22.417: INFO: (6) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... (200; 6.751125ms) Oct 14 14:40:22.418: INFO: (6) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 6.867982ms) Oct 14 14:40:22.418: INFO: (6) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 7.146002ms) Oct 14 14:40:22.418: INFO: (6) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 7.465909ms) Oct 14 14:40:22.418: INFO: (6) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: t... 
(200; 15.803299ms) Oct 14 14:40:22.436: INFO: (7) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 16.084495ms) Oct 14 14:40:22.436: INFO: (7) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:460/proxy/: tls baz (200; 16.14863ms) Oct 14 14:40:22.436: INFO: (7) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 16.028835ms) Oct 14 14:40:22.436: INFO: (7) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 16.801024ms) Oct 14 14:40:22.437: INFO: (7) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtestt... (200; 5.110136ms) Oct 14 14:40:22.444: INFO: (8) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.076136ms) Oct 14 14:40:22.444: INFO: (8) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 5.234803ms) Oct 14 14:40:22.444: INFO: (8) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 5.335536ms) Oct 14 14:40:22.444: INFO: (8) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 5.640646ms) Oct 14 14:40:22.445: INFO: (8) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: testtest (200; 5.802201ms) Oct 14 14:40:22.453: INFO: (9) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... (200; 5.770637ms) Oct 14 14:40:22.453: INFO: (9) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 4.042192ms) Oct 14 14:40:22.461: INFO: (10) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 4.678383ms) Oct 14 14:40:22.462: INFO: (10) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 4.787721ms) Oct 14 14:40:22.462: INFO: (10) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: t... 
(200; 8.498143ms) Oct 14 14:40:22.466: INFO: (10) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 9.2522ms) Oct 14 14:40:22.466: INFO: (10) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 8.826738ms) Oct 14 14:40:22.467: INFO: (10) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... (200; 4.672588ms) Oct 14 14:40:22.473: INFO: (11) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 4.571343ms) Oct 14 14:40:22.473: INFO: (11) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 5.357952ms) Oct 14 14:40:22.474: INFO: (11) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 5.696764ms) Oct 14 14:40:22.474: INFO: (11) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.534296ms) Oct 14 14:40:22.474: INFO: (11) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 6.539011ms) Oct 14 14:40:22.476: INFO: (11) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 7.070373ms) Oct 14 14:40:22.476: INFO: (11) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 7.531502ms) Oct 14 14:40:22.476: INFO: (11) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 7.88309ms) Oct 14 14:40:22.476: INFO: (11) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 7.820113ms) Oct 14 14:40:22.477: INFO: (11) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 8.216313ms) Oct 14 14:40:22.477: INFO: (11) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 3.69766ms) Oct 14 14:40:22.482: INFO: (12) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... 
(200; 4.537383ms) Oct 14 14:40:22.482: INFO: (12) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 4.864844ms) Oct 14 14:40:22.482: INFO: (12) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 5.196672ms) Oct 14 14:40:22.482: INFO: (12) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 8.896234ms) Oct 14 14:40:22.522: INFO: (13) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 9.292054ms) Oct 14 14:40:22.522: INFO: (13) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 10.079556ms) Oct 14 14:40:22.522: INFO: (13) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... (200; 10.559486ms) Oct 14 14:40:22.524: INFO: (13) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 11.31172ms) Oct 14 14:40:22.524: INFO: (13) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 11.1793ms) Oct 14 14:40:22.524: INFO: (13) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 11.346966ms) Oct 14 14:40:22.529: INFO: (14) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... 
(200; 4.322886ms) Oct 14 14:40:22.529: INFO: (14) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 4.109553ms) Oct 14 14:40:22.529: INFO: (14) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 4.638883ms) Oct 14 14:40:22.529: INFO: (14) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 4.643241ms) Oct 14 14:40:22.530: INFO: (14) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 5.961861ms) Oct 14 14:40:22.531: INFO: (14) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 6.313495ms) Oct 14 14:40:22.531: INFO: (14) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 6.388859ms) Oct 14 14:40:22.531: INFO: (14) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtest (200; 3.867764ms) Oct 14 14:40:22.536: INFO: (15) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 3.724943ms) Oct 14 14:40:22.537: INFO: (15) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: t... (200; 7.72633ms) Oct 14 14:40:22.541: INFO: (15) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... 
(200; 2.997021ms) Oct 14 14:40:22.545: INFO: (16) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 3.616274ms) Oct 14 14:40:22.545: INFO: (16) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 3.81671ms) Oct 14 14:40:22.547: INFO: (16) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 5.365047ms) Oct 14 14:40:22.547: INFO: (16) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 6.061208ms) Oct 14 14:40:22.548: INFO: (16) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 6.258071ms) Oct 14 14:40:22.548: INFO: (16) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 6.27163ms) Oct 14 14:40:22.548: INFO: (16) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testtestt... (200; 5.420303ms) Oct 14 14:40:22.556: INFO: (17) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 6.070698ms) Oct 14 14:40:22.556: INFO: (17) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 6.305716ms) Oct 14 14:40:22.557: INFO: (17) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 6.375812ms) Oct 14 14:40:22.557: INFO: (17) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 6.502696ms) Oct 14 14:40:22.557: INFO: (17) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 6.892861ms) Oct 14 14:40:22.557: INFO: (17) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 6.98006ms) Oct 14 14:40:22.558: INFO: (17) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 7.625885ms) Oct 14 14:40:22.558: INFO: (17) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 8.339892ms) 
Oct 14 14:40:22.558: INFO: (17) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 8.218888ms) Oct 14 14:40:22.562: INFO: (18) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:462/proxy/: tls qux (200; 3.825846ms) Oct 14 14:40:22.563: INFO: (18) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 4.255854ms) Oct 14 14:40:22.563: INFO: (18) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 4.741662ms) Oct 14 14:40:22.563: INFO: (18) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 4.800288ms) Oct 14 14:40:22.564: INFO: (18) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 5.311921ms) Oct 14 14:40:22.564: INFO: (18) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 5.762282ms) Oct 14 14:40:22.565: INFO: (18) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname2/proxy/: tls qux (200; 6.091815ms) Oct 14 14:40:22.565: INFO: (18) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:1080/proxy/: t... (200; 6.607498ms) Oct 14 14:40:22.565: INFO: (18) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname1/proxy/: foo (200; 6.70849ms) Oct 14 14:40:22.566: INFO: (18) /api/v1/namespaces/proxy-839/services/https:proxy-service-qpmmf:tlsportname1/proxy/: tls baz (200; 6.992838ms) Oct 14 14:40:22.566: INFO: (18) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk/proxy/: test (200; 7.145244ms) Oct 14 14:40:22.566: INFO: (18) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 7.264885ms) Oct 14 14:40:22.566: INFO: (18) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname1/proxy/: foo (200; 7.519732ms) Oct 14 14:40:22.566: INFO: (18) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: testt... 
(200; 6.188465ms) Oct 14 14:40:22.574: INFO: (19) /api/v1/namespaces/proxy-839/pods/https:proxy-service-qpmmf-xq7xk:443/proxy/: test (200; 12.612367ms) Oct 14 14:40:22.580: INFO: (19) /api/v1/namespaces/proxy-839/services/http:proxy-service-qpmmf:portname2/proxy/: bar (200; 13.008899ms) Oct 14 14:40:22.580: INFO: (19) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:162/proxy/: bar (200; 12.82047ms) Oct 14 14:40:22.580: INFO: (19) /api/v1/namespaces/proxy-839/services/proxy-service-qpmmf:portname2/proxy/: bar (200; 12.920936ms) Oct 14 14:40:22.580: INFO: (19) /api/v1/namespaces/proxy-839/pods/http:proxy-service-qpmmf-xq7xk:160/proxy/: foo (200; 13.028732ms) Oct 14 14:40:22.580: INFO: (19) /api/v1/namespaces/proxy-839/pods/proxy-service-qpmmf-xq7xk:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:40:35.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57" in namespace "projected-9467" to be "Succeeded or Failed" Oct 14 14:40:35.763: INFO: Pod "downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865793ms Oct 14 14:40:37.772: INFO: Pod "downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012092411s Oct 14 14:40:39.779: INFO: Pod "downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019833493s STEP: Saw pod success Oct 14 14:40:39.780: INFO: Pod "downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57" satisfied condition "Succeeded or Failed" Oct 14 14:40:39.784: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57 container client-container: STEP: delete the pod Oct 14 14:40:39.820: INFO: Waiting for pod downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57 to disappear Oct 14 14:40:39.834: INFO: Pod downwardapi-volume-dca24ef1-6f9e-4543-86bc-c3a248895a57 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:40:39.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9467" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":141,"skipped":2074,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:40:39.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 14:40:47.765: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 14:40:49.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283247, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283247, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283247, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283247, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 14:40:52.879: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:40:53.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8483" for this suite. STEP: Destroying namespace "webhook-8483-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.377 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":142,"skipped":2074,"failed":0} SS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:40:53.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: 
listing STEP: watching Oct 14 14:41:05.148: INFO: starting watch STEP: patching STEP: updating Oct 14 14:41:05.169: INFO: waiting for watch events with expected annotations Oct 14 14:41:05.170: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:41:05.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-8458" for this suite. • [SLOW TEST:12.183 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":143,"skipped":2076,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:41:05.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8478.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8478.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8478.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 14:41:11.557: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.563: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.567: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.571: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.581: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.585: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from 
pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.589: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.594: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:11.603: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:16.611: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.616: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.621: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from 
pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.626: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.639: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.643: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.647: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.651: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:16.658: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:21.610: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.615: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.620: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.624: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.637: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.642: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.647: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod 
dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.650: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:21.658: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:26.614: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.620: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.624: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.627: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod 
dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.640: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.645: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.649: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.653: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:26.660: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:31.611: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.617: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.621: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.625: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.637: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.641: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.645: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.650: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:31.660: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:36.626: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.633: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.648: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.652: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.662: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.666: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.669: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.673: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local from pod dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164: the server could not find the requested resource (get pods dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164) Oct 14 14:41:36.681: INFO: Lookups using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8478.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8478.svc.cluster.local jessie_udp@dns-test-service-2.dns-8478.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8478.svc.cluster.local] Oct 14 14:41:41.658: INFO: DNS probes using dns-8478/dns-test-afb46ee4-75a5-4a78-b14d-fdf0a0023164 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:41:41.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8478" for this suite.
• [SLOW TEST:36.384 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":144,"skipped":2091,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:41:41.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-4667
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 14 14:41:42.332: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 14 14:41:42.495: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 14 14:41:44.549: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 14 14:41:46.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 14 14:41:48.505: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:41:50.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:41:52.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:41:54.502: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:41:56.501: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:41:58.504: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:42:00.503: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 14 14:42:02.502: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 14 14:42:02.511: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 14 14:42:04.520: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 14 14:42:08.583: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.65:8080/dial?request=hostname&protocol=udp&host=10.244.2.64&port=8081&tries=1'] Namespace:pod-network-test-4667 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:42:08.583: INFO: >>> kubeConfig: /root/.kube/config
I1014 14:42:08.694460 11 log.go:181] (0x92d0cb0) (0x92d0e70) Create stream
I1014 14:42:08.694697 11 log.go:181] (0x92d0cb0) (0x92d0e70) Stream added, broadcasting: 1
I1014 14:42:08.699409 11 log.go:181] (0x92d0cb0) Reply frame received for 1
I1014 14:42:08.699606 11 log.go:181] (0x92d0cb0) (0xa980380) Create stream
I1014 14:42:08.699684 11 log.go:181] (0x92d0cb0) (0xa980380) Stream added, broadcasting: 3
I1014 14:42:08.701123 11 log.go:181] (0x92d0cb0) Reply frame received for 3
I1014 14:42:08.701259 11 log.go:181] (0x92d0cb0) (0xa980700) Create stream
I1014 14:42:08.701334 11 log.go:181] (0x92d0cb0) (0xa980700) Stream added, broadcasting: 5
I1014 14:42:08.702831 11 log.go:181] (0x92d0cb0) Reply frame received for 5
I1014 14:42:08.789239 11 log.go:181] (0x92d0cb0) Data frame received for 3
I1014 14:42:08.789403 11 log.go:181] (0xa980380) (3) Data frame handling
I1014 14:42:08.789507 11 log.go:181] (0x92d0cb0) Data frame received for 5
I1014 14:42:08.789643 11 log.go:181] (0xa980700) (5) Data frame handling
I1014 14:42:08.789771 11 log.go:181] (0xa980380) (3) Data frame sent
I1014 14:42:08.789856 11 log.go:181] (0x92d0cb0) Data frame received for 3
I1014 14:42:08.789933 11 log.go:181] (0xa980380) (3) Data frame handling
I1014 14:42:08.792422 11 log.go:181] (0x92d0cb0) Data frame received for 1
I1014 14:42:08.792531 11 log.go:181] (0x92d0e70) (1) Data frame handling
I1014 14:42:08.792684 11 log.go:181] (0x92d0e70) (1) Data frame sent
I1014 14:42:08.792914 11 log.go:181] (0x92d0cb0) (0x92d0e70) Stream removed, broadcasting: 1
I1014 14:42:08.793051 11 log.go:181] (0x92d0cb0) Go away received
I1014 14:42:08.793444 11 log.go:181] (0x92d0cb0) (0x92d0e70) Stream removed, broadcasting: 1
I1014 14:42:08.793569 11 log.go:181] (0x92d0cb0) (0xa980380) Stream removed, broadcasting: 3
I1014 14:42:08.793667 11 log.go:181] (0x92d0cb0) (0xa980700) Stream removed, broadcasting: 5
Oct 14 14:42:08.793: INFO: Waiting for responses: map[]
Oct 14 14:42:08.798: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.65:8080/dial?request=hostname&protocol=udp&host=10.244.1.183&port=8081&tries=1'] Namespace:pod-network-test-4667 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 14:42:08.798: INFO: >>> kubeConfig: /root/.kube/config
I1014 14:42:08.907596 11 log.go:181] (0x92d12d0) (0x92d13b0) Create stream
I1014 14:42:08.907748 11 log.go:181] (0x92d12d0) (0x92d13b0) Stream added, broadcasting: 1
I1014 14:42:08.911310 11 log.go:181] (0x92d12d0) Reply frame received for 1
I1014 14:42:08.911483 11 log.go:181] (0x92d12d0) (0x92d1730) Create stream
I1014 14:42:08.911556 11 log.go:181] (0x92d12d0) (0x92d1730) Stream added, broadcasting: 3
I1014 14:42:08.913058 11 log.go:181] (0x92d12d0) Reply frame received for 3
I1014 14:42:08.913299 11 log.go:181] (0x92d12d0) (0x8e53960) Create stream
I1014 14:42:08.913414 11 log.go:181] (0x92d12d0) (0x8e53960) Stream added, broadcasting: 5
I1014 14:42:08.914900 11 log.go:181] (0x92d12d0) Reply frame received for 5
I1014 14:42:08.998450 11 log.go:181] (0x92d12d0) Data frame received for 3
I1014 14:42:08.998651 11 log.go:181] (0x92d1730) (3) Data frame handling
I1014 14:42:08.998827 11 log.go:181] (0x92d1730) (3) Data frame sent
I1014 14:42:08.998973 11 log.go:181] (0x92d12d0) Data frame received for 3
I1014 14:42:08.999101 11 log.go:181] (0x92d1730) (3) Data frame handling
I1014 14:42:08.999315 11 log.go:181] (0x92d12d0) Data frame received for 5
I1014 14:42:08.999519 11 log.go:181] (0x8e53960) (5) Data frame handling
I1014 14:42:09.001182 11 log.go:181] (0x92d12d0) Data frame received for 1
I1014 14:42:09.001370 11 log.go:181] (0x92d13b0) (1) Data frame handling
I1014 14:42:09.001563 11 log.go:181] (0x92d13b0) (1) Data frame sent
I1014 14:42:09.001684 11 log.go:181] (0x92d12d0) (0x92d13b0) Stream removed, broadcasting: 1
I1014 14:42:09.001835 11 log.go:181] (0x92d12d0) Go away received
I1014 14:42:09.002477 11 log.go:181] (0x92d12d0) (0x92d13b0) Stream removed, broadcasting: 1
I1014 14:42:09.002643 11 log.go:181] (0x92d12d0) (0x92d1730) Stream removed, broadcasting: 3
I1014 14:42:09.002767 11 log.go:181] (0x92d12d0) (0x8e53960) Stream removed, broadcasting: 5
Oct 14 14:42:09.003: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:09.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4667" for this suite.
• [SLOW TEST:27.223 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":145,"skipped":2099,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:09.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-66c3f7cf-6d8d-48c5-a0a4-30314bfeffca
STEP: Creating a pod to test consume secrets
Oct 14 14:42:09.135: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c" in namespace "projected-9324" to be "Succeeded or Failed"
Oct 14 14:42:09.158: INFO: Pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.7858ms
Oct 14 14:42:11.201: INFO: Pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066685771s
Oct 14 14:42:13.210: INFO: Pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.074776043s
Oct 14 14:42:15.429: INFO: Pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.294533268s
STEP: Saw pod success
Oct 14 14:42:15.430: INFO: Pod "pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c" satisfied condition "Succeeded or Failed"
Oct 14 14:42:15.508: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c container secret-volume-test:
STEP: delete the pod
Oct 14 14:42:16.120: INFO: Waiting for pod pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c to disappear
Oct 14 14:42:16.272: INFO: Pod pod-projected-secrets-b1548adc-e173-4103-a4b5-6c3bdcda9c7c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:16.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9324" for this suite.
• [SLOW TEST:7.323 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2106,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:16.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 14:42:25.699: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 14:42:27.718: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283345, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283345, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283345, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738283345, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 14:42:30.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:43.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6881" for this suite.
STEP: Destroying namespace "webhook-6881-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:26.923 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":147,"skipped":2114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:43.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 14:42:43.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a" in namespace "projected-5557" to be "Succeeded or Failed"
Oct 14 14:42:43.355: INFO: Pod "downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.820885ms
Oct 14 14:42:45.368: INFO: Pod "downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021092344s
Oct 14 14:42:47.386: INFO: Pod "downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039800669s
STEP: Saw pod success
Oct 14 14:42:47.387: INFO: Pod "downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a" satisfied condition "Succeeded or Failed"
Oct 14 14:42:47.392: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a container client-container:
STEP: delete the pod
Oct 14 14:42:47.425: INFO: Waiting for pod downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a to disappear
Oct 14 14:42:47.434: INFO: Pod downwardapi-volume-b8e84d38-33f0-4360-b1c4-83a0df22f32a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:47.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5557" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2136,"failed":0}
S
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:47.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Oct 14 14:42:52.753: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:52.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2226" for this suite.
• [SLOW TEST:5.675 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":149,"skipped":2137,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:53.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-197246e1-7a72-443b-a6e0-5b4191ec9781
STEP: Creating a pod to test consume secrets
Oct 14 14:42:53.282: INFO: Waiting up to 5m0s for pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d" in namespace "secrets-9092" to be "Succeeded or Failed"
Oct 14 14:42:53.298: INFO: Pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.156803ms
Oct 14 14:42:55.354: INFO: Pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071880118s
Oct 14 14:42:57.524: INFO: Pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d": Phase="Running", Reason="", readiness=true. Elapsed: 4.242627388s
Oct 14 14:42:59.532: INFO: Pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249727818s
STEP: Saw pod success
Oct 14 14:42:59.532: INFO: Pod "pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d" satisfied condition "Succeeded or Failed"
Oct 14 14:42:59.537: INFO: Trying to get logs from node latest-worker pod pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d container secret-volume-test:
STEP: delete the pod
Oct 14 14:42:59.593: INFO: Waiting for pod pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d to disappear
Oct 14 14:42:59.636: INFO: Pod pod-secrets-96323ee2-2a9b-4006-a74d-02ccab81820d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:42:59.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9092" for this suite.
• [SLOW TEST:6.527 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2150,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:42:59.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-45351445-86ef-43ca-9876-ed5691bac1d1
STEP: Creating a pod to test consume secrets
Oct 14 14:43:00.114: INFO: Waiting up to 5m0s for pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850" in namespace "secrets-7417" to be "Succeeded or Failed"
Oct 14 14:43:00.145: INFO: Pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850": Phase="Pending", Reason="", readiness=false. Elapsed: 30.291189ms
Oct 14 14:43:02.155: INFO: Pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040600821s
Oct 14 14:43:04.162: INFO: Pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850": Phase="Running", Reason="", readiness=true. Elapsed: 4.047442792s
Oct 14 14:43:06.178: INFO: Pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063239217s
STEP: Saw pod success
Oct 14 14:43:06.178: INFO: Pod "pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850" satisfied condition "Succeeded or Failed"
Oct 14 14:43:06.183: INFO: Trying to get logs from node latest-worker pod pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850 container secret-volume-test:
STEP: delete the pod
Oct 14 14:43:06.284: INFO: Waiting for pod pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850 to disappear
Oct 14 14:43:06.288: INFO: Pod pod-secrets-e8bd30b6-1392-45ac-918c-cecd2b992850 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:43:06.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7417" for this suite.
• [SLOW TEST:6.651 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:43:06.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-aac86df0-d8fc-47b1-9be1-3e094d452967
STEP: Creating a pod to test consume secrets
Oct 14 14:43:06.501: INFO: Waiting up to 5m0s for pod "pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d" in namespace "secrets-8411" to be "Succeeded or Failed"
Oct 14 14:43:06.578: INFO: Pod "pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d": Phase="Pending", Reason="", readiness=false. Elapsed: 76.067956ms
Oct 14 14:43:08.632: INFO: Pod "pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130398478s
Oct 14 14:43:10.639: INFO: Pod "pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.137113453s
STEP: Saw pod success
Oct 14 14:43:10.639: INFO: Pod "pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d" satisfied condition "Succeeded or Failed"
Oct 14 14:43:10.800: INFO: Trying to get logs from node latest-worker pod pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d container secret-volume-test:
STEP: delete the pod
Oct 14 14:43:10.934: INFO: Waiting for pod pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d to disappear
Oct 14 14:43:10.940: INFO: Pod pod-secrets-898c4f88-b47e-4ff0-948b-81ca9475da5d no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:43:10.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8411" for this suite.
STEP: Destroying namespace "secret-namespace-4716" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2192,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Events should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:43:10.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
Oct 14 14:43:11.067: INFO: created test-event-1
Oct 14 14:43:11.079: INFO: created test-event-2
Oct 14 14:43:11.084: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Oct 14 14:43:11.101: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Oct 14 14:43:11.186: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:43:11.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6854" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":153,"skipped":2197,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:43:11.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 14:43:11.355: INFO: Create a RollingUpdate DaemonSet
Oct 14 14:43:11.361: INFO: Check that daemon pods launch on every node of the cluster
Oct 14 14:43:11.406: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:11.463: INFO: Number of nodes with available pods: 0
Oct 14 14:43:11.463: INFO: Node latest-worker is running more than one daemon pod
Oct 14 14:43:12.487: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:12.493: INFO: Number of nodes with available pods: 0
Oct 14 14:43:12.493: INFO: Node latest-worker is running more than one daemon pod
Oct 14 14:43:13.634: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:13.806: INFO: Number of nodes with available pods: 0
Oct 14 14:43:13.807: INFO: Node latest-worker is running more than one daemon pod
Oct 14 14:43:14.536: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:14.611: INFO: Number of nodes with available pods: 0
Oct 14 14:43:14.611: INFO: Node latest-worker is running more than one daemon pod
Oct 14 14:43:15.476: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:15.528: INFO: Number of nodes with available pods: 1
Oct 14 14:43:15.528: INFO: Node latest-worker2 is running more than one daemon pod
Oct 14 14:43:16.493: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:16.512: INFO: Number of nodes with available pods: 2
Oct 14 14:43:16.513: INFO: Number of running nodes: 2, number of available pods: 2
Oct 14 14:43:16.513: INFO: Update the DaemonSet to trigger a rollout
Oct 14 14:43:16.530: INFO: Updating DaemonSet daemon-set
Oct 14 14:43:26.586: INFO: Roll back the DaemonSet before rollout is complete
Oct 14 14:43:26.598: INFO: Updating DaemonSet daemon-set
Oct 14 14:43:26.598: INFO: Make sure DaemonSet rollback is complete
Oct 14 14:43:26.622: INFO: Wrong image for pod: daemon-set-rd6kd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct 14 14:43:26.622: INFO: Pod daemon-set-rd6kd is not available
Oct 14 14:43:26.642: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 14 14:43:27.652: INFO: Wrong image for pod: daemon-set-rd6kd. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct 14 14:43:27.653: INFO: Pod daemon-set-rd6kd is not available Oct 14 14:43:27.662: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 14:43:28.653: INFO: Pod daemon-set-pfvzx is not available Oct 14 14:43:28.661: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5137, will wait for the garbage collector to delete the pods Oct 14 14:43:28.735: INFO: Deleting DaemonSet.extensions daemon-set took: 10.595235ms Oct 14 14:43:29.137: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.168273ms Oct 14 14:43:32.573: INFO: Number of nodes with available pods: 0 Oct 14 14:43:32.573: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 14:43:32.578: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5137/daemonsets","resourceVersion":"1145893"},"items":null} Oct 14 14:43:32.583: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5137/pods","resourceVersion":"1145893"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:43:32.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5137" for this suite. 
• [SLOW TEST:21.418 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":154,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:43:32.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 14 14:43:32.780: INFO: Waiting up to 5m0s for pod "pod-7b9e23f7-bf81-4095-bc11-98c19394ac95" in namespace "emptydir-9597" to be "Succeeded or Failed" Oct 14 14:43:32.785: INFO: Pod "pod-7b9e23f7-bf81-4095-bc11-98c19394ac95": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.986338ms Oct 14 14:43:34.792: INFO: Pod "pod-7b9e23f7-bf81-4095-bc11-98c19394ac95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011444313s Oct 14 14:43:36.807: INFO: Pod "pod-7b9e23f7-bf81-4095-bc11-98c19394ac95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026586112s STEP: Saw pod success Oct 14 14:43:36.807: INFO: Pod "pod-7b9e23f7-bf81-4095-bc11-98c19394ac95" satisfied condition "Succeeded or Failed" Oct 14 14:43:36.815: INFO: Trying to get logs from node latest-worker pod pod-7b9e23f7-bf81-4095-bc11-98c19394ac95 container test-container: STEP: delete the pod Oct 14 14:43:36.853: INFO: Waiting for pod pod-7b9e23f7-bf81-4095-bc11-98c19394ac95 to disappear Oct 14 14:43:36.882: INFO: Pod pod-7b9e23f7-bf81-4095-bc11-98c19394ac95 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:43:36.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9597" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2229,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:43:36.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64 Oct 14 14:43:36.977: INFO: Pod name my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64: Found 0 pods out of 1 Oct 14 14:43:41.992: INFO: Pod name my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64: Found 1 pods out of 1 Oct 14 14:43:41.992: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64" are running Oct 14 14:43:41.999: INFO: Pod "my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64-d5gzp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2020-10-14 14:43:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 14:43:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 14:43:40 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 14:43:36 +0000 UTC Reason: Message:}]) Oct 14 14:43:42.003: INFO: Trying to dial the pod Oct 14 14:43:47.024: INFO: Controller my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64: Got expected result from replica 1 [my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64-d5gzp]: "my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64-d5gzp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:43:47.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9867" for this suite. 
• [SLOW TEST:10.161 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":156,"skipped":2260,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:43:47.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-1416f065-3f73-4615-af37-264afaf55858 STEP: Creating a pod to test consume configMaps Oct 14 14:43:47.138: INFO: Waiting up to 5m0s for pod "pod-configmaps-23592e40-d12b-428b-a700-635413306c8b" in namespace "configmap-1085" to be "Succeeded or Failed" Oct 14 14:43:47.188: INFO: Pod 
"pod-configmaps-23592e40-d12b-428b-a700-635413306c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.674397ms Oct 14 14:43:49.290: INFO: Pod "pod-configmaps-23592e40-d12b-428b-a700-635413306c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151060477s Oct 14 14:43:51.298: INFO: Pod "pod-configmaps-23592e40-d12b-428b-a700-635413306c8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159822848s STEP: Saw pod success Oct 14 14:43:51.299: INFO: Pod "pod-configmaps-23592e40-d12b-428b-a700-635413306c8b" satisfied condition "Succeeded or Failed" Oct 14 14:43:51.304: INFO: Trying to get logs from node latest-worker pod pod-configmaps-23592e40-d12b-428b-a700-635413306c8b container configmap-volume-test: STEP: delete the pod Oct 14 14:43:51.352: INFO: Waiting for pod pod-configmaps-23592e40-d12b-428b-a700-635413306c8b to disappear Oct 14 14:43:51.403: INFO: Pod pod-configmaps-23592e40-d12b-428b-a700-635413306c8b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:43:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1085" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":157,"skipped":2274,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:43:51.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 14:43:51.507: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 14:43:51.538: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 14:43:51.543: INFO: Logging pods the apiserver thinks are on node latest-worker before test Oct 14 14:43:51.552: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.553: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 14:43:51.553: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.553: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 14:43:51.553: INFO: my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64-d5gzp from replication-controller-9867 started at 2020-10-14 14:43:37 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.553: INFO: Container my-hostname-basic-c72ce837-6491-410f-94b8-49192e59cf64 ready: true, restart count 0 Oct 14 14:43:51.553: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Oct 14 14:43:51.566: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.567: INFO: Container coredns ready: true, restart count 0 Oct 14 14:43:51.567: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.567: INFO: Container coredns ready: true, restart count 0 Oct 14 14:43:51.567: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.567: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 14:43:51.567: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.567: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 14:43:51.567: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 14:43:51.567: INFO: Container 
local-path-provisioner ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-10c7b861-173b-441a-a8df-20f078b11285 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-10c7b861-173b-441a-a8df-20f078b11285 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-10c7b861-173b-441a-a8df-20f078b11285 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:44:07.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4209" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.462 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":158,"skipped":2276,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:44:07.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 14 14:44:08.017: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:45:48.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6218" for this suite. • [SLOW TEST:100.575 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":159,"skipped":2282,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Oct 14 14:45:48.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6697.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.75.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.75.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.75.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.75.165_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6697.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6697.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.75.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.75.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.75.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.75.165_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 14:45:54.807: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.818: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.823: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.846: INFO: Unable to read jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod 
dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.849: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.853: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.857: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:54.878: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:45:59.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod 
dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.940: INFO: Unable to read jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.966: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:45:59.995: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:46:04.885: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.927: INFO: Unable to read jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested 
resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.935: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:04.960: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:46:09.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods 
dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.901: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.963: INFO: Unable to read jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.970: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:09.990: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:10.017: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:46:14.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.892: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.897: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.927: INFO: Unable to read jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.936: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.940: 
INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:14.966: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:46:19.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.900: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.927: INFO: Unable to read 
jessie_udp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.931: INFO: Unable to read jessie_tcp@dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.934: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.938: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local from pod dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b: the server could not find the requested resource (get pods dns-test-41870635-5c2c-4985-9942-41aaa714bd7b) Oct 14 14:46:19.959: INFO: Lookups using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b failed for: [wheezy_udp@dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@dns-test-service.dns-6697.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_udp@dns-test-service.dns-6697.svc.cluster.local jessie_tcp@dns-test-service.dns-6697.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6697.svc.cluster.local] Oct 14 14:46:25.018: INFO: DNS probes using dns-6697/dns-test-41870635-5c2c-4985-9942-41aaa714bd7b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:46:25.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6697" for this suite.
• [SLOW TEST:37.413 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":160,"skipped":2302,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:46:25.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if kubectl can dry-run update Pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image
docker.io/library/httpd:2.4.38-alpine Oct 14 14:46:25.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-231' Oct 14 14:46:27.304: INFO: stderr: "" Oct 14 14:46:27.304: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 14 14:46:27.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-231' Oct 14 14:46:28.637: INFO: stderr: "" Oct 14 14:46:28.637: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-14T14:46:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T14:46:27Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n 
\"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T14:46:27Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-231\",\n \"resourceVersion\": \"1146662\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-231/pods/e2e-test-httpd-pod\",\n \"uid\": \"60159bd4-5b72-4fb3-adf3-838cd3e948f1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-xxc99\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-xxc99\",\n 
\"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-xxc99\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T14:46:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T14:46:27Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T14:46:27Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T14:46:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-14T14:46:27Z\"\n }\n}\n" Oct 14 14:46:28.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-231' Oct 14 14:46:31.371: INFO: stderr: "W1014 14:46:29.498111 2677 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Oct 14 14:46:31.371: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Oct 14 14:46:31.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 
--kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-231'
Oct 14 14:46:45.630: INFO: stderr: ""
Oct 14 14:46:45.630: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:46:45.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-231" for this suite.
• [SLOW TEST:19.767 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl server-side dry-run
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919
should check if kubectl can dry-run update Pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":161,"skipped":2306,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:46:45.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Oct 14 14:46:45.768: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4828 /api/v1/namespaces/watch-4828/configmaps/e2e-watch-test-watch-closed 16e5a75b-f01c-477b-9e73-09bc74ca6420 1146746 0 2020-10-14 14:46:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 14:46:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 14 14:46:45.770: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4828 /api/v1/namespaces/watch-4828/configmaps/e2e-watch-test-watch-closed 16e5a75b-f01c-477b-9e73-09bc74ca6420 1146747 0 2020-10-14 14:46:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 14:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Oct 14 14:46:45.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4828 /api/v1/namespaces/watch-4828/configmaps/e2e-watch-test-watch-closed 16e5a75b-f01c-477b-9e73-09bc74ca6420 1146748 0 2020-10-14 14:46:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 14:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 14 14:46:45.807: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4828 /api/v1/namespaces/watch-4828/configmaps/e2e-watch-test-watch-closed 16e5a75b-f01c-477b-9e73-09bc74ca6420 1146749 0 2020-10-14 14:46:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 14:46:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:46:45.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4828" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":162,"skipped":2356,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:46:45.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Oct 14 14:46:45.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Oct 14 14:47:46.744: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 14:48:06.722: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 14:49:26.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7588" for this suite.
• [SLOW TEST:160.850 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":163,"skipped":2369,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 14:49:26.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Oct 14 14:49:30.949: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6528
PodName:var-expansion-20f4d2f0-0a61-436a-bd87-18415fb40b6d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:49:30.950: INFO: >>> kubeConfig: /root/.kube/config I1014 14:49:31.059202 11 log.go:181] (0xbbae620) (0xbbae690) Create stream I1014 14:49:31.059357 11 log.go:181] (0xbbae620) (0xbbae690) Stream added, broadcasting: 1 I1014 14:49:31.063041 11 log.go:181] (0xbbae620) Reply frame received for 1 I1014 14:49:31.063266 11 log.go:181] (0xbbae620) (0x9732070) Create stream I1014 14:49:31.063409 11 log.go:181] (0xbbae620) (0x9732070) Stream added, broadcasting: 3 I1014 14:49:31.065528 11 log.go:181] (0xbbae620) Reply frame received for 3 I1014 14:49:31.065785 11 log.go:181] (0xbbae620) (0xbbae850) Create stream I1014 14:49:31.065895 11 log.go:181] (0xbbae620) (0xbbae850) Stream added, broadcasting: 5 I1014 14:49:31.067494 11 log.go:181] (0xbbae620) Reply frame received for 5 I1014 14:49:31.154284 11 log.go:181] (0xbbae620) Data frame received for 3 I1014 14:49:31.154538 11 log.go:181] (0x9732070) (3) Data frame handling I1014 14:49:31.154666 11 log.go:181] (0xbbae620) Data frame received for 5 I1014 14:49:31.154790 11 log.go:181] (0xbbae850) (5) Data frame handling I1014 14:49:31.155751 11 log.go:181] (0xbbae620) Data frame received for 1 I1014 14:49:31.155852 11 log.go:181] (0xbbae690) (1) Data frame handling I1014 14:49:31.155956 11 log.go:181] (0xbbae690) (1) Data frame sent I1014 14:49:31.156061 11 log.go:181] (0xbbae620) (0xbbae690) Stream removed, broadcasting: 1 I1014 14:49:31.156195 11 log.go:181] (0xbbae620) Go away received I1014 14:49:31.156718 11 log.go:181] (0xbbae620) (0xbbae690) Stream removed, broadcasting: 1 I1014 14:49:31.157069 11 log.go:181] (0xbbae620) (0x9732070) Stream removed, broadcasting: 3 I1014 14:49:31.157207 11 log.go:181] (0xbbae620) (0xbbae850) Stream removed, broadcasting: 5 STEP: test for file in mounted path Oct 14 14:49:31.164: INFO: ExecWithOptions {Command:[/bin/sh -c 
test -f /subpath_mount/test.log] Namespace:var-expansion-6528 PodName:var-expansion-20f4d2f0-0a61-436a-bd87-18415fb40b6d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:49:31.165: INFO: >>> kubeConfig: /root/.kube/config I1014 14:49:31.271849 11 log.go:181] (0x88af260) (0x88af2d0) Create stream I1014 14:49:31.272064 11 log.go:181] (0x88af260) (0x88af2d0) Stream added, broadcasting: 1 I1014 14:49:31.277605 11 log.go:181] (0x88af260) Reply frame received for 1 I1014 14:49:31.277893 11 log.go:181] (0x88af260) (0xbbaed20) Create stream I1014 14:49:31.278020 11 log.go:181] (0x88af260) (0xbbaed20) Stream added, broadcasting: 3 I1014 14:49:31.280755 11 log.go:181] (0x88af260) Reply frame received for 3 I1014 14:49:31.281149 11 log.go:181] (0x88af260) (0xbbaeee0) Create stream I1014 14:49:31.281373 11 log.go:181] (0x88af260) (0xbbaeee0) Stream added, broadcasting: 5 I1014 14:49:31.282977 11 log.go:181] (0x88af260) Reply frame received for 5 I1014 14:49:31.347388 11 log.go:181] (0x88af260) Data frame received for 5 I1014 14:49:31.347569 11 log.go:181] (0xbbaeee0) (5) Data frame handling I1014 14:49:31.347712 11 log.go:181] (0x88af260) Data frame received for 3 I1014 14:49:31.347894 11 log.go:181] (0xbbaed20) (3) Data frame handling I1014 14:49:31.349246 11 log.go:181] (0x88af260) Data frame received for 1 I1014 14:49:31.349415 11 log.go:181] (0x88af2d0) (1) Data frame handling I1014 14:49:31.349584 11 log.go:181] (0x88af2d0) (1) Data frame sent I1014 14:49:31.349706 11 log.go:181] (0x88af260) (0x88af2d0) Stream removed, broadcasting: 1 I1014 14:49:31.349858 11 log.go:181] (0x88af260) Go away received I1014 14:49:31.350147 11 log.go:181] (0x88af260) (0x88af2d0) Stream removed, broadcasting: 1 I1014 14:49:31.350260 11 log.go:181] (0x88af260) (0xbbaed20) Stream removed, broadcasting: 3 I1014 14:49:31.350391 11 log.go:181] (0x88af260) (0xbbaeee0) Stream removed, broadcasting: 5 STEP: updating the annotation value Oct 
14 14:49:31.867: INFO: Successfully updated pod "var-expansion-20f4d2f0-0a61-436a-bd87-18415fb40b6d" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 14 14:49:31.884: INFO: Deleting pod "var-expansion-20f4d2f0-0a61-436a-bd87-18415fb40b6d" in namespace "var-expansion-6528" Oct 14 14:49:31.890: INFO: Wait up to 5m0s for pod "var-expansion-20f4d2f0-0a61-436a-bd87-18415fb40b6d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:15.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6528" for this suite. • [SLOW TEST:49.249 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":164,"skipped":2372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:15.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:16.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9937" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":165,"skipped":2407,"failed":0} S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:16.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Oct 14 14:50:16.131: INFO: Waiting up to 5m0s for pod "client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7" in namespace "containers-3182" to be "Succeeded or Failed" Oct 14 14:50:16.184: INFO: Pod "client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 52.693771ms Oct 14 14:50:18.192: INFO: Pod "client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060418354s Oct 14 14:50:20.200: INFO: Pod "client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068503443s STEP: Saw pod success Oct 14 14:50:20.200: INFO: Pod "client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7" satisfied condition "Succeeded or Failed" Oct 14 14:50:20.205: INFO: Trying to get logs from node latest-worker pod client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7 container test-container: STEP: delete the pod Oct 14 14:50:20.478: INFO: Waiting for pod client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7 to disappear Oct 14 14:50:20.548: INFO: Pod client-containers-f6f6a50f-4586-496b-bdec-3b093c66b9f7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:20.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3182" for this suite. 
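The "override the image's default command and arguments" test above creates a pod whose `command` and `args` fields replace the container image's ENTRYPOINT and CMD. The actual manifest is generated by the e2e framework; a minimal sketch of the pattern it exercises (all names and the image are illustrative) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # illustrative; the suite uses its own test images
    command: ["/bin/echo"]          # overrides the image's ENTRYPOINT
    args: ["override", "arguments"] # overrides the image's CMD
```

The test then reads the container logs and asserts they contain the overridden output rather than what the image would print by default.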
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2408,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:20.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:50:20.702: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0acb8434-63ec-4817-85fb-64ddcdb91151" in namespace "security-context-test-8179" to be "Succeeded or Failed" Oct 14 14:50:20.711: INFO: Pod "busybox-readonly-false-0acb8434-63ec-4817-85fb-64ddcdb91151": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06045ms Oct 14 14:50:22.783: INFO: Pod "busybox-readonly-false-0acb8434-63ec-4817-85fb-64ddcdb91151": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080202253s Oct 14 14:50:24.794: INFO: Pod "busybox-readonly-false-0acb8434-63ec-4817-85fb-64ddcdb91151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091434179s Oct 14 14:50:24.794: INFO: Pod "busybox-readonly-false-0acb8434-63ec-4817-85fb-64ddcdb91151" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:24.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8179" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2416,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:24.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-map-d436ce9d-c2df-4ebc-94e4-a724719ec433 STEP: Creating a pod to test consume configMaps Oct 14 14:50:24.934: INFO: Waiting up to 5m0s for pod "pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f" in namespace "configmap-7776" to be "Succeeded or Failed" Oct 14 14:50:24.966: INFO: Pod "pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.900467ms Oct 14 14:50:26.975: INFO: Pod "pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040289546s Oct 14 14:50:28.981: INFO: Pod "pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046926813s STEP: Saw pod success Oct 14 14:50:28.982: INFO: Pod "pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f" satisfied condition "Succeeded or Failed" Oct 14 14:50:28.986: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f container configmap-volume-test: STEP: delete the pod Oct 14 14:50:29.041: INFO: Waiting for pod pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f to disappear Oct 14 14:50:29.074: INFO: Pod pod-configmaps-d46292bf-0ac8-42b1-b5aa-599e68e5174f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:29.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7776" for this suite. 
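The ConfigMap test above ("with mappings and Item mode set") mounts a ConfigMap as a volume, remapping keys to file paths and setting a per-item file mode. A sketch of the relevant pod-spec fragment (key names and the mode value are illustrative, not taken from the log):

```yaml
spec:
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example  # illustrative name
      items:
      - key: data-1                 # ConfigMap key to project
        path: path/to/data-2        # remapped file path inside the mount
        mode: 0400                  # per-item file mode ("Item mode set")
```

The test's container stats the projected file and verifies both its contents and its mode bits.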
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2426,"failed":0} SSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:29.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 14 14:50:29.216: INFO: starting watch STEP: patching STEP: updating Oct 14 14:50:29.242: INFO: waiting for watch events with expected annotations Oct 14 14:50:29.244: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:29.279: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "ingressclass-7903" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":169,"skipped":2431,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:29.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:46.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1017" for this suite. 
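The ResourceQuota/secret test above creates a quota that counts Secret objects, then watches `status.used` rise when a Secret is created and fall when it is deleted. A sketch of such a quota (the hard limit value is illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # illustrative name
spec:
  hard:
    secrets: "10"         # cap on the number of Secret objects in the namespace
```

After creation, `status.used.secrets` reflects the current count (including any default service-account token secrets, which is why the test first discovers how many secrets the namespace holds by default).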
• [SLOW TEST:17.181 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":170,"skipped":2443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:46.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 14:50:51.201: INFO: Successfully updated pod "annotationupdatef5adbc86-8520-4ce6-8f32-6111003326e0" [AfterEach] 
[sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:53.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7040" for this suite. • [SLOW TEST:6.788 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2483,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:53.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to 
test downward api env vars Oct 14 14:50:53.409: INFO: Waiting up to 5m0s for pod "downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb" in namespace "downward-api-4809" to be "Succeeded or Failed" Oct 14 14:50:53.418: INFO: Pod "downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.905601ms Oct 14 14:50:55.622: INFO: Pod "downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212483919s Oct 14 14:50:57.629: INFO: Pod "downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219725096s STEP: Saw pod success Oct 14 14:50:57.629: INFO: Pod "downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb" satisfied condition "Succeeded or Failed" Oct 14 14:50:57.635: INFO: Trying to get logs from node latest-worker pod downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb container dapi-container: STEP: delete the pod Oct 14 14:50:57.693: INFO: Waiting for pod downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb to disappear Oct 14 14:50:57.735: INFO: Pod downward-api-3b5a07f3-afb5-4785-92fc-fb5383fa0afb no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:50:57.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4809" for this suite. 
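The Downward API test above verifies that when a container declares no resource limits, env vars backed by `resourceFieldRef` fall back to the node's allocatable capacity. A sketch of the container fragment this exercises (image and names illustrative):

```yaml
containers:
- name: dapi-container
  image: busybox            # illustrative
  command: ["sh", "-c", "env"]
  # No resources.limits are set, so these resolve to node allocatable values
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
```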
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2501,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:50:57.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-6024b30e-b16a-44bc-8f61-90213e80d2a9 STEP: Creating a pod to test consume configMaps Oct 14 14:50:57.983: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad" in namespace "projected-767" to be "Succeeded or Failed" Oct 14 14:50:58.004: INFO: Pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 21.316053ms Oct 14 14:51:00.085: INFO: Pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101663125s Oct 14 14:51:02.091: INFO: Pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.10784231s Oct 14 14:51:04.097: INFO: Pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114470251s STEP: Saw pod success Oct 14 14:51:04.098: INFO: Pod "pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad" satisfied condition "Succeeded or Failed" Oct 14 14:51:04.102: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad container projected-configmap-volume-test: STEP: delete the pod Oct 14 14:51:04.177: INFO: Waiting for pod pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad to disappear Oct 14 14:51:04.204: INFO: Pod pod-projected-configmaps-aa0d7533-fbc1-4a2b-b8a9-b8ceb3e2e7ad no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:51:04.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-767" for this suite. 
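The projected-ConfigMap test above consumes a ConfigMap through a `projected` volume while running the container as a non-root user. A rough sketch of the pattern (the UID, key names, and paths are illustrative guesses, not from the log):

```yaml
spec:
  securityContext:
    runAsUser: 1000               # illustrative non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: busybox                # illustrative
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: example-configmap  # illustrative name
          items:
          - key: data-1
            path: projected-configmap/data-1
```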
• [SLOW TEST:6.387 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:04.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 
14:51:04.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2551" for this suite. STEP: Destroying namespace "nspatchtest-e3bc6358-3df2-4703-9d35-93667c3672e8-2164" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":174,"skipped":2594,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:04.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:51:20.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6728" for this suite. • [SLOW TEST:16.287 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":175,"skipped":2601,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:20.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:51:20.869: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043" in namespace "security-context-test-2885" to be "Succeeded or Failed" Oct 14 14:51:20.890: INFO: Pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043": Phase="Pending", Reason="", readiness=false. Elapsed: 20.927688ms Oct 14 14:51:23.021: INFO: Pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151509971s Oct 14 14:51:25.030: INFO: Pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.160427011s Oct 14 14:51:27.038: INFO: Pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168687925s Oct 14 14:51:27.039: INFO: Pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043" satisfied condition "Succeeded or Failed" Oct 14 14:51:27.047: INFO: Got logs for pod "busybox-privileged-false-004e02ac-a7a9-471a-896f-0eae7e74f043": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:51:27.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2885" for this suite. • [SLOW TEST:6.288 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2603,"failed":0} S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:27.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:51:27.217: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 14 14:51:28.395: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:51:29.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9061" for this suite. 
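The ReplicationController test above creates a quota named "condition-test" allowing two pods, then an RC of the same name asking for more, and checks that the controller surfaces a `ReplicaFailure` condition until the RC is scaled down within quota. A sketch of the two objects involved (pod template contents are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # exceeds the pod quota, triggering ReplicaFailure
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2   # illustrative image
```

Scaling `replicas` down to 2, as the test does, lets all pods fit the quota and clears the failure condition.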
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":177,"skipped":2604,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:29.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:51:30.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85" in namespace "downward-api-5154" to be "Succeeded or Failed" Oct 14 14:51:30.278: INFO: Pod "downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85": Phase="Pending", Reason="", readiness=false. Elapsed: 56.821418ms Oct 14 14:51:32.346: INFO: Pod "downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.125187096s Oct 14 14:51:34.353: INFO: Pod "downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132649363s STEP: Saw pod success Oct 14 14:51:34.354: INFO: Pod "downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85" satisfied condition "Succeeded or Failed" Oct 14 14:51:34.358: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85 container client-container: STEP: delete the pod Oct 14 14:51:34.378: INFO: Waiting for pod downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85 to disappear Oct 14 14:51:34.399: INFO: Pod downwardapi-volume-9e547214-262c-4d31-88cb-4fc4b10dfa85 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:51:34.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5154" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":178,"skipped":2611,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:51:34.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 14 14:51:34.843: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:53:15.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7697" for this suite. 
• [SLOW TEST:101.093 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":179,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:53:15.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Oct 14 14:53:15.595: INFO: created test-pod-1 Oct 14 14:53:15.651: INFO: created test-pod-2 Oct 14 14:53:15.684: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for 
all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:53:15.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-331" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":180,"skipped":2644,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:53:15.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7867 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 14 14:53:16.070: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 14 14:53:16.136: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:53:18.145: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:53:20.145: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) Oct 14 14:53:22.145: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:24.145: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:26.146: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:28.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:30.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:32.576: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:34.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:36.145: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:38.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:40.144: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:53:42.161: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 14 14:53:42.171: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 14 14:53:46.318: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.96 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7867 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:53:46.318: INFO: >>> kubeConfig: /root/.kube/config I1014 14:53:46.427628 11 log.go:181] (0x81b0150) (0x81b01c0) Create stream I1014 14:53:46.427949 11 log.go:181] (0x81b0150) (0x81b01c0) Stream added, broadcasting: 1 I1014 14:53:46.435334 11 log.go:181] (0x81b0150) Reply frame received for 1 I1014 14:53:46.435555 11 log.go:181] (0x81b0150) (0xa75c700) Create stream I1014 14:53:46.435633 11 log.go:181] (0x81b0150) (0xa75c700) Stream added, broadcasting: 3 I1014 14:53:46.437213 11 log.go:181] (0x81b0150) Reply frame received for 3 I1014 14:53:46.437373 11 log.go:181] (0x81b0150) (0x81b0380) Create stream I1014 
14:53:46.437454 11 log.go:181] (0x81b0150) (0x81b0380) Stream added, broadcasting: 5 I1014 14:53:46.438800 11 log.go:181] (0x81b0150) Reply frame received for 5 I1014 14:53:47.519615 11 log.go:181] (0x81b0150) Data frame received for 3 I1014 14:53:47.519905 11 log.go:181] (0xa75c700) (3) Data frame handling I1014 14:53:47.520176 11 log.go:181] (0x81b0150) Data frame received for 5 I1014 14:53:47.520452 11 log.go:181] (0x81b0380) (5) Data frame handling I1014 14:53:47.520586 11 log.go:181] (0xa75c700) (3) Data frame sent I1014 14:53:47.520738 11 log.go:181] (0x81b0150) Data frame received for 3 I1014 14:53:47.520988 11 log.go:181] (0xa75c700) (3) Data frame handling I1014 14:53:47.521879 11 log.go:181] (0x81b0150) Data frame received for 1 I1014 14:53:47.522067 11 log.go:181] (0x81b01c0) (1) Data frame handling I1014 14:53:47.522251 11 log.go:181] (0x81b01c0) (1) Data frame sent I1014 14:53:47.522416 11 log.go:181] (0x81b0150) (0x81b01c0) Stream removed, broadcasting: 1 I1014 14:53:47.522608 11 log.go:181] (0x81b0150) Go away received I1014 14:53:47.523031 11 log.go:181] (0x81b0150) (0x81b01c0) Stream removed, broadcasting: 1 I1014 14:53:47.523303 11 log.go:181] (0x81b0150) (0xa75c700) Stream removed, broadcasting: 3 I1014 14:53:47.523468 11 log.go:181] (0x81b0150) (0x81b0380) Stream removed, broadcasting: 5 Oct 14 14:53:47.524: INFO: Found all expected endpoints: [netserver-0] Oct 14 14:53:47.532: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.188 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7867 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:53:47.532: INFO: >>> kubeConfig: /root/.kube/config I1014 14:53:47.638840 11 log.go:181] (0x81b0a80) (0x81b0af0) Create stream I1014 14:53:47.639025 11 log.go:181] (0x81b0a80) (0x81b0af0) Stream added, broadcasting: 1 I1014 14:53:47.645138 11 log.go:181] (0x81b0a80) Reply frame received for 1 I1014 
14:53:47.645316 11 log.go:181] (0x81b0a80) (0xa404460) Create stream I1014 14:53:47.645407 11 log.go:181] (0x81b0a80) (0xa404460) Stream added, broadcasting: 3 I1014 14:53:47.646868 11 log.go:181] (0x81b0a80) Reply frame received for 3 I1014 14:53:47.647024 11 log.go:181] (0x81b0a80) (0xa404850) Create stream I1014 14:53:47.647108 11 log.go:181] (0x81b0a80) (0xa404850) Stream added, broadcasting: 5 I1014 14:53:47.648427 11 log.go:181] (0x81b0a80) Reply frame received for 5 I1014 14:53:48.734854 11 log.go:181] (0x81b0a80) Data frame received for 5 I1014 14:53:48.734992 11 log.go:181] (0xa404850) (5) Data frame handling I1014 14:53:48.735208 11 log.go:181] (0x81b0a80) Data frame received for 3 I1014 14:53:48.735417 11 log.go:181] (0xa404460) (3) Data frame handling I1014 14:53:48.735573 11 log.go:181] (0xa404460) (3) Data frame sent I1014 14:53:48.735722 11 log.go:181] (0x81b0a80) Data frame received for 3 I1014 14:53:48.735853 11 log.go:181] (0xa404460) (3) Data frame handling I1014 14:53:48.738468 11 log.go:181] (0x81b0a80) Data frame received for 1 I1014 14:53:48.738724 11 log.go:181] (0x81b0af0) (1) Data frame handling I1014 14:53:48.738934 11 log.go:181] (0x81b0af0) (1) Data frame sent I1014 14:53:48.739116 11 log.go:181] (0x81b0a80) (0x81b0af0) Stream removed, broadcasting: 1 I1014 14:53:48.739273 11 log.go:181] (0x81b0a80) Go away received I1014 14:53:48.739742 11 log.go:181] (0x81b0a80) (0x81b0af0) Stream removed, broadcasting: 1 I1014 14:53:48.739958 11 log.go:181] (0x81b0a80) (0xa404460) Stream removed, broadcasting: 3 I1014 14:53:48.740159 11 log.go:181] (0x81b0a80) (0xa404850) Stream removed, broadcasting: 5 Oct 14 14:53:48.740: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:53:48.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-7867" for this suite. • [SLOW TEST:32.767 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":2646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:53:48.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: 
creating the pod Oct 14 14:53:48.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7705' Oct 14 14:53:54.053: INFO: stderr: "" Oct 14 14:53:54.053: INFO: stdout: "pod/pause created\n" Oct 14 14:53:54.053: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 14 14:53:54.053: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7705" to be "running and ready" Oct 14 14:53:54.072: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.310541ms Oct 14 14:53:56.080: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026038637s Oct 14 14:53:58.087: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033450658s Oct 14 14:54:00.096: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.041922127s Oct 14 14:54:00.096: INFO: Pod "pause" satisfied condition "running and ready" Oct 14 14:54:00.096: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Oct 14 14:54:00.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7705' Oct 14 14:54:01.511: INFO: stderr: "" Oct 14 14:54:01.511: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 14 14:54:01.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7705' Oct 14 14:54:02.746: INFO: stderr: "" Oct 14 14:54:02.746: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 14 14:54:02.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7705' Oct 14 14:54:04.023: INFO: stderr: "" Oct 14 14:54:04.024: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 14 14:54:04.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7705' Oct 14 14:54:05.261: INFO: stderr: "" Oct 14 14:54:05.261: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Oct 14 14:54:05.262: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7705' Oct 14 14:54:06.492: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 14 14:54:06.492: INFO: stdout: "pod \"pause\" force deleted\n" Oct 14 14:54:06.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7705' Oct 14 14:54:07.828: INFO: stderr: "No resources found in kubectl-7705 namespace.\n" Oct 14 14:54:07.829: INFO: stdout: "" Oct 14 14:54:07.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7705 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 14:54:09.210: INFO: stderr: "" Oct 14 14:54:09.210: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:09.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7705" for this suite. 
• [SLOW TEST:20.468 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":182,"skipped":2673,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:09.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Oct 14 14:54:13.835: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9673 pod-service-account-ad83de49-485c-4659-85fd-0dc53cb50552 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 
14 14:54:15.360: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9673 pod-service-account-ad83de49-485c-4659-85fd-0dc53cb50552 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 14 14:54:16.827: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9673 pod-service-account-ad83de49-485c-4659-85fd-0dc53cb50552 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:18.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9673" for this suite. • [SLOW TEST:9.115 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":183,"skipped":2690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:18.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 14 14:54:22.981: INFO: Successfully updated pod "pod-update-60049c6e-3482-4832-b4e9-8c62b0c23f0e" STEP: verifying the updated pod is in kubernetes Oct 14 14:54:23.036: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:23.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-75" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":2723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:23.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9779eb37-a0c4-4c27-819b-ecbfdbabe675 STEP: Creating a pod to test consume secrets Oct 14 14:54:23.211: INFO: Waiting up to 5m0s for pod "pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb" in namespace "secrets-7982" to be "Succeeded or Failed" Oct 14 14:54:23.234: INFO: Pod "pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.648673ms Oct 14 14:54:25.249: INFO: Pod "pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038767963s Oct 14 14:54:27.258: INFO: Pod "pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047781645s STEP: Saw pod success Oct 14 14:54:27.259: INFO: Pod "pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb" satisfied condition "Succeeded or Failed" Oct 14 14:54:27.265: INFO: Trying to get logs from node latest-worker pod pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb container secret-env-test: STEP: delete the pod Oct 14 14:54:27.360: INFO: Waiting for pod pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb to disappear Oct 14 14:54:27.414: INFO: Pod pod-secrets-a9f5b32c-2f26-410d-8769-e61f210287cb no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:27.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7982" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:27.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Oct 14 14:54:27.640: INFO: Waiting up to 5m0s for pod "client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c" in namespace "containers-2047" to be "Succeeded or Failed" Oct 14 14:54:27.676: INFO: Pod "client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.716327ms Oct 14 14:54:29.757: INFO: Pod "client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11683811s Oct 14 14:54:31.765: INFO: Pod "client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.124727037s STEP: Saw pod success Oct 14 14:54:31.766: INFO: Pod "client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c" satisfied condition "Succeeded or Failed" Oct 14 14:54:31.789: INFO: Trying to get logs from node latest-worker pod client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c container test-container: STEP: delete the pod Oct 14 14:54:31.833: INFO: Waiting for pod client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c to disappear Oct 14 14:54:31.850: INFO: Pod client-containers-1c10a7f4-de7a-4563-b8e6-463e536b5a4c no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:31.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2047" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":2820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:31.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:54:31.960: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:36.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-546" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":2862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:36.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-2d783952-eb42-4ca0-8335-d9701380541f STEP: Creating a pod to test consume secrets Oct 14 14:54:36.374: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382" in namespace "projected-53" to be "Succeeded or Failed" Oct 14 14:54:36.401: INFO: Pod "pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382": Phase="Pending", Reason="", readiness=false. Elapsed: 27.581262ms Oct 14 14:54:38.459: INFO: Pod "pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085118439s Oct 14 14:54:40.466: INFO: Pod "pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092393198s STEP: Saw pod success Oct 14 14:54:40.467: INFO: Pod "pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382" satisfied condition "Succeeded or Failed" Oct 14 14:54:40.483: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382 container projected-secret-volume-test: STEP: delete the pod Oct 14 14:54:40.558: INFO: Waiting for pod pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382 to disappear Oct 14 14:54:40.574: INFO: Pod pod-projected-secrets-76953848-18bf-46e1-a409-7408bc8b2382 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:54:40.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-53" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":2929,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:54:40.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:54:40.762: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Oct 14 14:54:40.786: INFO: Number of nodes with available pods: 0 Oct 14 14:54:40.786: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Oct 14 14:54:40.899: INFO: Number of nodes with available pods: 0 Oct 14 14:54:40.899: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:41.907: INFO: Number of nodes with available pods: 0 Oct 14 14:54:41.907: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:42.907: INFO: Number of nodes with available pods: 0 Oct 14 14:54:42.907: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:43.909: INFO: Number of nodes with available pods: 0 Oct 14 14:54:43.909: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:44.907: INFO: Number of nodes with available pods: 1 Oct 14 14:54:44.907: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 14 14:54:45.338: INFO: Number of nodes with available pods: 1 Oct 14 14:54:45.338: INFO: Number of running nodes: 0, number of available pods: 1 Oct 14 14:54:46.451: INFO: Number of nodes with available pods: 0 Oct 14 14:54:46.451: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 14 14:54:46.581: INFO: Number of nodes with available pods: 0 Oct 14 14:54:46.581: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:47.588: INFO: Number of nodes with available pods: 0 Oct 14 14:54:47.588: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:48.590: INFO: Number of nodes with available pods: 0 Oct 14 14:54:48.590: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:49.589: INFO: Number of nodes with available pods: 0 Oct 14 14:54:49.589: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:50.593: INFO: Number of nodes with available pods: 0 Oct 14 14:54:50.593: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:51.590: INFO: Number of nodes with 
available pods: 0 Oct 14 14:54:51.590: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:52.599: INFO: Number of nodes with available pods: 0 Oct 14 14:54:52.600: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:53.589: INFO: Number of nodes with available pods: 0 Oct 14 14:54:53.589: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:54.589: INFO: Number of nodes with available pods: 0 Oct 14 14:54:54.589: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:55.589: INFO: Number of nodes with available pods: 0 Oct 14 14:54:55.589: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:56.593: INFO: Number of nodes with available pods: 0 Oct 14 14:54:56.593: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:57.589: INFO: Number of nodes with available pods: 0 Oct 14 14:54:57.589: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:58.605: INFO: Number of nodes with available pods: 0 Oct 14 14:54:58.605: INFO: Node latest-worker is running more than one daemon pod Oct 14 14:54:59.589: INFO: Number of nodes with available pods: 1 Oct 14 14:54:59.589: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8987, will wait for the garbage collector to delete the pods Oct 14 14:54:59.667: INFO: Deleting DaemonSet.extensions daemon-set took: 10.10158ms Oct 14 14:55:00.069: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.314308ms Oct 14 14:55:05.674: INFO: Number of nodes with available pods: 0 Oct 14 14:55:05.674: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 14:55:05.678: INFO: 
daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8987/daemonsets","resourceVersion":"1148977"},"items":null} Oct 14 14:55:05.682: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8987/pods","resourceVersion":"1148977"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:55:05.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8987" for this suite. • [SLOW TEST:25.150 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":189,"skipped":2933,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:55:05.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Oct 14 14:55:05.824: INFO: Waiting up to 5m0s for pod "var-expansion-b24e2390-0253-4ec8-80b9-988931cead61" in namespace "var-expansion-6415" to be "Succeeded or Failed" Oct 14 14:55:05.834: INFO: Pod "var-expansion-b24e2390-0253-4ec8-80b9-988931cead61": Phase="Pending", Reason="", readiness=false. Elapsed: 9.789921ms Oct 14 14:55:07.842: INFO: Pod "var-expansion-b24e2390-0253-4ec8-80b9-988931cead61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017746303s Oct 14 14:55:09.849: INFO: Pod "var-expansion-b24e2390-0253-4ec8-80b9-988931cead61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025415867s STEP: Saw pod success Oct 14 14:55:09.850: INFO: Pod "var-expansion-b24e2390-0253-4ec8-80b9-988931cead61" satisfied condition "Succeeded or Failed" Oct 14 14:55:09.854: INFO: Trying to get logs from node latest-worker pod var-expansion-b24e2390-0253-4ec8-80b9-988931cead61 container dapi-container: STEP: delete the pod Oct 14 14:55:09.935: INFO: Waiting for pod var-expansion-b24e2390-0253-4ec8-80b9-988931cead61 to disappear Oct 14 14:55:09.941: INFO: Pod var-expansion-b24e2390-0253-4ec8-80b9-988931cead61 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:55:09.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6415" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2935,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:55:09.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4467 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4467 STEP: creating replication controller externalsvc in namespace services-4467 I1014 14:55:10.210641 11 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4467, replica count: 2 I1014 14:55:13.262643 11 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 14:55:16.263470 11 
runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 14 14:55:16.325: INFO: Creating new exec pod Oct 14 14:55:20.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-4467 execpod5rmrt -- /bin/sh -x -c nslookup clusterip-service.services-4467.svc.cluster.local' Oct 14 14:55:21.861: INFO: stderr: "I1014 14:55:21.702723 2937 log.go:181] (0x247c700) (0x247cee0) Create stream\nI1014 14:55:21.707006 2937 log.go:181] (0x247c700) (0x247cee0) Stream added, broadcasting: 1\nI1014 14:55:21.731681 2937 log.go:181] (0x247c700) Reply frame received for 1\nI1014 14:55:21.732349 2937 log.go:181] (0x247c700) (0x25cca80) Create stream\nI1014 14:55:21.732435 2937 log.go:181] (0x247c700) (0x25cca80) Stream added, broadcasting: 3\nI1014 14:55:21.734644 2937 log.go:181] (0x247c700) Reply frame received for 3\nI1014 14:55:21.734933 2937 log.go:181] (0x247c700) (0x28b4070) Create stream\nI1014 14:55:21.735016 2937 log.go:181] (0x247c700) (0x28b4070) Stream added, broadcasting: 5\nI1014 14:55:21.736032 2937 log.go:181] (0x247c700) Reply frame received for 5\nI1014 14:55:21.828803 2937 log.go:181] (0x247c700) Data frame received for 5\nI1014 14:55:21.829099 2937 log.go:181] (0x28b4070) (5) Data frame handling\nI1014 14:55:21.829500 2937 log.go:181] (0x28b4070) (5) Data frame sent\n+ nslookup clusterip-service.services-4467.svc.cluster.local\nI1014 14:55:21.839591 2937 log.go:181] (0x247c700) Data frame received for 3\nI1014 14:55:21.839714 2937 log.go:181] (0x25cca80) (3) Data frame handling\nI1014 14:55:21.839832 2937 log.go:181] (0x25cca80) (3) Data frame sent\nI1014 14:55:21.840433 2937 log.go:181] (0x247c700) Data frame received for 3\nI1014 14:55:21.840562 2937 log.go:181] (0x25cca80) (3) Data frame handling\nI1014 14:55:21.840690 2937 log.go:181] 
(0x25cca80) (3) Data frame sent\nI1014 14:55:21.840831 2937 log.go:181] (0x247c700) Data frame received for 5\nI1014 14:55:21.841018 2937 log.go:181] (0x28b4070) (5) Data frame handling\nI1014 14:55:21.841162 2937 log.go:181] (0x247c700) Data frame received for 3\nI1014 14:55:21.841264 2937 log.go:181] (0x25cca80) (3) Data frame handling\nI1014 14:55:21.843154 2937 log.go:181] (0x247c700) Data frame received for 1\nI1014 14:55:21.843346 2937 log.go:181] (0x247cee0) (1) Data frame handling\nI1014 14:55:21.843536 2937 log.go:181] (0x247cee0) (1) Data frame sent\nI1014 14:55:21.844170 2937 log.go:181] (0x247c700) (0x247cee0) Stream removed, broadcasting: 1\nI1014 14:55:21.847218 2937 log.go:181] (0x247c700) Go away received\nI1014 14:55:21.849984 2937 log.go:181] (0x247c700) (0x247cee0) Stream removed, broadcasting: 1\nI1014 14:55:21.850352 2937 log.go:181] (0x247c700) (0x25cca80) Stream removed, broadcasting: 3\nI1014 14:55:21.850576 2937 log.go:181] (0x247c700) (0x28b4070) Stream removed, broadcasting: 5\n" Oct 14 14:55:21.862: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4467.svc.cluster.local\tcanonical name = externalsvc.services-4467.svc.cluster.local.\nName:\texternalsvc.services-4467.svc.cluster.local\nAddress: 10.105.115.103\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4467, will wait for the garbage collector to delete the pods Oct 14 14:55:21.930: INFO: Deleting ReplicationController externalsvc took: 9.414519ms Oct 14 14:55:22.331: INFO: Terminating ReplicationController externalsvc pods took: 400.66471ms Oct 14 14:55:35.757: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:55:35.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-4467" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.834 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":191,"skipped":2937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:55:35.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 14 14:55:42.398: INFO: Successfully updated pod "adopt-release-lvwcw" STEP: Checking that the Job readopts the Pod Oct 14 
14:55:42.399: INFO: Waiting up to 15m0s for pod "adopt-release-lvwcw" in namespace "job-6967" to be "adopted" Oct 14 14:55:42.413: INFO: Pod "adopt-release-lvwcw": Phase="Running", Reason="", readiness=true. Elapsed: 14.179293ms Oct 14 14:55:44.421: INFO: Pod "adopt-release-lvwcw": Phase="Running", Reason="", readiness=true. Elapsed: 2.022277817s Oct 14 14:55:44.421: INFO: Pod "adopt-release-lvwcw" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 14 14:55:44.939: INFO: Successfully updated pod "adopt-release-lvwcw" STEP: Checking that the Job releases the Pod Oct 14 14:55:44.940: INFO: Waiting up to 15m0s for pod "adopt-release-lvwcw" in namespace "job-6967" to be "released" Oct 14 14:55:44.958: INFO: Pod "adopt-release-lvwcw": Phase="Running", Reason="", readiness=true. Elapsed: 17.679907ms Oct 14 14:55:47.000: INFO: Pod "adopt-release-lvwcw": Phase="Running", Reason="", readiness=true. Elapsed: 2.060326852s Oct 14 14:55:47.001: INFO: Pod "adopt-release-lvwcw" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:55:47.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6967" for this suite. 
• [SLOW TEST:11.224 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":192,"skipped":2960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:55:47.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 14:55:59.297: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 14:56:01.565: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284159, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284159, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284159, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284159, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 14:56:04.644: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:56:04.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1170" for this suite. STEP: Destroying namespace "webhook-1170-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.070 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":193,"skipped":2995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:56:05.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:56:05.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3264" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":303,"completed":194,"skipped":3037,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:56:05.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:56:06.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f" in namespace "downward-api-6503" to be "Succeeded or Failed" Oct 14 14:56:06.215: INFO: Pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.954356ms Oct 14 14:56:08.223: INFO: Pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072140406s Oct 14 14:56:10.229: INFO: Pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078205509s Oct 14 14:56:12.239: INFO: Pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087871948s STEP: Saw pod success Oct 14 14:56:12.239: INFO: Pod "downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f" satisfied condition "Succeeded or Failed" Oct 14 14:56:12.245: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f container client-container: STEP: delete the pod Oct 14 14:56:12.271: INFO: Waiting for pod downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f to disappear Oct 14 14:56:12.286: INFO: Pod downwardapi-volume-f3f72db1-3a72-4e3b-9181-b1528a468b7f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:56:12.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6503" for this suite. 
• [SLOW TEST:6.559 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3064,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:56:12.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Oct 14 14:56:12.457: INFO: Created pod &Pod{ObjectMeta:{dns-3165 dns-3165 /api/v1/namespaces/dns-3165/pods/dns-3165 c5870ff1-dd9f-49de-9ecd-fd64dceed40c 1149479 0 2020-10-14 14:56:12 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-10-14 14:56:12 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fkrs4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fkrs4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fkrs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 14:56:12.486: INFO: The status of Pod dns-3165 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:56:14.497: INFO: The status of Pod dns-3165 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:56:16.496: INFO: The status of Pod dns-3165 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Oct 14 14:56:16.497: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3165 PodName:dns-3165 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:56:16.497: INFO: >>> kubeConfig: /root/.kube/config I1014 14:56:16.611196 11 log.go:181] (0x7840af0) (0x7840cb0) Create stream I1014 14:56:16.611526 11 log.go:181] (0x7840af0) (0x7840cb0) Stream added, broadcasting: 1 I1014 14:56:16.616419 11 log.go:181] (0x7840af0) Reply frame received for 1 I1014 14:56:16.616661 11 log.go:181] (0x7840af0) (0x9c6de30) Create stream I1014 14:56:16.616777 11 log.go:181] (0x7840af0) (0x9c6de30) Stream added, broadcasting: 3 I1014 14:56:16.618611 11 log.go:181] (0x7840af0) Reply frame received for 3 I1014 14:56:16.618749 11 log.go:181] (0x7840af0) (0x811e000) Create stream I1014 14:56:16.618830 11 log.go:181] (0x7840af0) (0x811e000) Stream added, broadcasting: 5 I1014 14:56:16.620016 11 log.go:181] (0x7840af0) Reply frame received for 5 I1014 14:56:16.706686 11 log.go:181] (0x7840af0) Data frame received for 3 I1014 14:56:16.706857 11 log.go:181] (0x9c6de30) (3) Data frame handling I1014 14:56:16.706995 11 log.go:181] (0x9c6de30) (3) Data frame sent I1014 14:56:16.707117 11 log.go:181] (0x7840af0) Data frame received for 3 I1014 14:56:16.707298 11 log.go:181] (0x7840af0) Data frame received for 5 I1014 14:56:16.707498 11 log.go:181] (0x811e000) (5) Data frame handling I1014 14:56:16.707635 11 log.go:181] (0x9c6de30) (3) Data frame handling I1014 14:56:16.710566 11 log.go:181] (0x7840af0) Data frame received for 1 I1014 14:56:16.710667 11 log.go:181] (0x7840cb0) (1) Data frame handling I1014 14:56:16.710826 11 log.go:181] (0x7840cb0) (1) Data frame sent I1014 14:56:16.710970 11 log.go:181] (0x7840af0) (0x7840cb0) Stream removed, broadcasting: 1 I1014 14:56:16.711114 11 log.go:181] (0x7840af0) Go away received I1014 14:56:16.711680 11 log.go:181] (0x7840af0) (0x7840cb0) Stream removed, broadcasting: 1 I1014 
14:56:16.711830 11 log.go:181] (0x7840af0) (0x9c6de30) Stream removed, broadcasting: 3 I1014 14:56:16.711976 11 log.go:181] (0x7840af0) (0x811e000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Oct 14 14:56:16.712: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3165 PodName:dns-3165 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 14:56:16.712: INFO: >>> kubeConfig: /root/.kube/config I1014 14:56:16.841243 11 log.go:181] (0xa114e00) (0xa114ee0) Create stream I1014 14:56:16.841414 11 log.go:181] (0xa114e00) (0xa114ee0) Stream added, broadcasting: 1 I1014 14:56:16.844606 11 log.go:181] (0xa114e00) Reply frame received for 1 I1014 14:56:16.844762 11 log.go:181] (0xa114e00) (0x7841b90) Create stream I1014 14:56:16.844914 11 log.go:181] (0xa114e00) (0x7841b90) Stream added, broadcasting: 3 I1014 14:56:16.846226 11 log.go:181] (0xa114e00) Reply frame received for 3 I1014 14:56:16.846390 11 log.go:181] (0xa114e00) (0x92d0310) Create stream I1014 14:56:16.846456 11 log.go:181] (0xa114e00) (0x92d0310) Stream added, broadcasting: 5 I1014 14:56:16.847687 11 log.go:181] (0xa114e00) Reply frame received for 5 I1014 14:56:16.921601 11 log.go:181] (0xa114e00) Data frame received for 3 I1014 14:56:16.921759 11 log.go:181] (0x7841b90) (3) Data frame handling I1014 14:56:16.921896 11 log.go:181] (0x7841b90) (3) Data frame sent I1014 14:56:16.922493 11 log.go:181] (0xa114e00) Data frame received for 5 I1014 14:56:16.922686 11 log.go:181] (0x92d0310) (5) Data frame handling I1014 14:56:16.922840 11 log.go:181] (0xa114e00) Data frame received for 3 I1014 14:56:16.922994 11 log.go:181] (0x7841b90) (3) Data frame handling I1014 14:56:16.924476 11 log.go:181] (0xa114e00) Data frame received for 1 I1014 14:56:16.924591 11 log.go:181] (0xa114ee0) (1) Data frame handling I1014 14:56:16.924702 11 log.go:181] (0xa114ee0) (1) Data frame sent I1014 14:56:16.924812 11 
log.go:181] (0xa114e00) (0xa114ee0) Stream removed, broadcasting: 1 I1014 14:56:16.925112 11 log.go:181] (0xa114e00) Go away received I1014 14:56:16.925524 11 log.go:181] (0xa114e00) (0xa114ee0) Stream removed, broadcasting: 1 I1014 14:56:16.925653 11 log.go:181] (0xa114e00) (0x7841b90) Stream removed, broadcasting: 3 I1014 14:56:16.925763 11 log.go:181] (0xa114e00) (0x92d0310) Stream removed, broadcasting: 5 Oct 14 14:56:16.926: INFO: Deleting pod dns-3165... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:56:16.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3165" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":196,"skipped":3084,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:56:16.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns 
configmap-volume pods Oct 14 14:56:18.602: INFO: Pod name wrapped-volume-race-71d7b44d-418f-4598-93ad-77feca5538d1: Found 0 pods out of 5 Oct 14 14:56:24.110: INFO: Pod name wrapped-volume-race-71d7b44d-418f-4598-93ad-77feca5538d1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-71d7b44d-418f-4598-93ad-77feca5538d1 in namespace emptydir-wrapper-5239, will wait for the garbage collector to delete the pods Oct 14 14:56:38.398: INFO: Deleting ReplicationController wrapped-volume-race-71d7b44d-418f-4598-93ad-77feca5538d1 took: 10.899526ms Oct 14 14:56:38.798: INFO: Terminating ReplicationController wrapped-volume-race-71d7b44d-418f-4598-93ad-77feca5538d1 pods took: 400.730818ms STEP: Creating RC which spawns configmap-volume pods Oct 14 14:56:55.962: INFO: Pod name wrapped-volume-race-11a7c05c-2073-47ad-86b1-eac43ab2b0bd: Found 0 pods out of 5 Oct 14 14:57:00.989: INFO: Pod name wrapped-volume-race-11a7c05c-2073-47ad-86b1-eac43ab2b0bd: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-11a7c05c-2073-47ad-86b1-eac43ab2b0bd in namespace emptydir-wrapper-5239, will wait for the garbage collector to delete the pods Oct 14 14:57:15.186: INFO: Deleting ReplicationController wrapped-volume-race-11a7c05c-2073-47ad-86b1-eac43ab2b0bd took: 9.864231ms Oct 14 14:57:15.687: INFO: Terminating ReplicationController wrapped-volume-race-11a7c05c-2073-47ad-86b1-eac43ab2b0bd pods took: 500.872556ms STEP: Creating RC which spawns configmap-volume pods Oct 14 14:57:25.958: INFO: Pod name wrapped-volume-race-ad8385ea-b468-4d8a-834c-24fc4d8d0f8b: Found 1 pods out of 5 Oct 14 14:57:30.981: INFO: Pod name wrapped-volume-race-ad8385ea-b468-4d8a-834c-24fc4d8d0f8b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ad8385ea-b468-4d8a-834c-24fc4d8d0f8b in namespace emptydir-wrapper-5239, will wait for the 
garbage collector to delete the pods Oct 14 14:57:47.109: INFO: Deleting ReplicationController wrapped-volume-race-ad8385ea-b468-4d8a-834c-24fc4d8d0f8b took: 9.386589ms Oct 14 14:57:47.610: INFO: Terminating ReplicationController wrapped-volume-race-ad8385ea-b468-4d8a-834c-24fc4d8d0f8b pods took: 500.786243ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:57:56.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5239" for this suite. • [SLOW TEST:99.380 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":197,"skipped":3103,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:57:56.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Oct 14 14:57:56.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config cluster-info' Oct 14 14:57:57.676: INFO: stderr: "" Oct 14 14:57:57.676: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34323\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34323/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:57:57.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5019" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":198,"skipped":3115,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:57:57.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 14 14:57:57.885: INFO: Waiting up to 5m0s for pod "pod-28d62b60-67a5-4cdf-8bce-4439e28981f5" in namespace "emptydir-5012" to be "Succeeded or Failed" Oct 14 14:57:57.897: INFO: Pod "pod-28d62b60-67a5-4cdf-8bce-4439e28981f5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.72225ms Oct 14 14:57:59.904: INFO: Pod "pod-28d62b60-67a5-4cdf-8bce-4439e28981f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0183106s Oct 14 14:58:01.934: INFO: Pod "pod-28d62b60-67a5-4cdf-8bce-4439e28981f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04864132s STEP: Saw pod success Oct 14 14:58:01.934: INFO: Pod "pod-28d62b60-67a5-4cdf-8bce-4439e28981f5" satisfied condition "Succeeded or Failed" Oct 14 14:58:01.951: INFO: Trying to get logs from node latest-worker pod pod-28d62b60-67a5-4cdf-8bce-4439e28981f5 container test-container: STEP: delete the pod Oct 14 14:58:02.019: INFO: Waiting for pod pod-28d62b60-67a5-4cdf-8bce-4439e28981f5 to disappear Oct 14 14:58:02.025: INFO: Pod pod-28d62b60-67a5-4cdf-8bce-4439e28981f5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:02.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5012" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3117,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:02.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 14:58:02.176: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:02.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2143" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":200,"skipped":3124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:02.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:03.222: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9533" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":201,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:03.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-34778c3a-aa41-4ed7-8f10-f7371e449b51 STEP: Creating a pod to test consume secrets Oct 14 14:58:03.418: INFO: Waiting up to 5m0s for pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb" in namespace "secrets-9540" to be "Succeeded or Failed" Oct 14 14:58:03.494: INFO: Pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb": Phase="Pending", Reason="", readiness=false. Elapsed: 75.455369ms Oct 14 14:58:05.542: INFO: Pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123632509s Oct 14 14:58:07.569: INFO: Pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.151279007s Oct 14 14:58:09.579: INFO: Pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161036289s STEP: Saw pod success Oct 14 14:58:09.580: INFO: Pod "pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb" satisfied condition "Succeeded or Failed" Oct 14 14:58:09.586: INFO: Trying to get logs from node latest-worker pod pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb container secret-volume-test: STEP: delete the pod Oct 14 14:58:09.624: INFO: Waiting for pod pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb to disappear Oct 14 14:58:09.665: INFO: Pod pod-secrets-27a263d7-a0fe-43de-9ba8-524f7e1d84fb no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:09.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9540" for this suite. 
• [SLOW TEST:6.438 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3173,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:09.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:13.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6480" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:13.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:30.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5982" for this suite. • [SLOW TEST:16.292 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":204,"skipped":3222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:30.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 14:58:30.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5" in namespace "downward-api-285" to be "Succeeded or Failed" Oct 14 14:58:30.253: INFO: Pod "downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.826395ms Oct 14 14:58:32.263: INFO: Pod "downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034256192s Oct 14 14:58:34.271: INFO: Pod "downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042536005s STEP: Saw pod success Oct 14 14:58:34.271: INFO: Pod "downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5" satisfied condition "Succeeded or Failed" Oct 14 14:58:34.278: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5 container client-container: STEP: delete the pod Oct 14 14:58:34.328: INFO: Waiting for pod downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5 to disappear Oct 14 14:58:34.342: INFO: Pod downwardapi-volume-d0988d68-dc76-4f23-82d9-74fca77060e5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:34.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-285" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3303,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:34.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 14 14:58:34.965: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150925 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:34.966: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150927 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:34.966: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150929 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:34 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 14 14:58:45.124: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150976 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:45.126: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150977 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:45.127: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2218 /api/v1/namespaces/watch-2218/configmaps/e2e-watch-test-label-changed b32a59e9-ec75-448f-889a-727991387db6 1150978 0 2020-10-14 14:58:34 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:45 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:58:45.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2218" for this suite. • [SLOW TEST:10.800 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":206,"skipped":3322,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:58:45.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on 
configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 14 14:58:45.253: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1150986 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:45.255: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1150986 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 14 14:58:55.272: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151019 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:55 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:58:55.273: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151019 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:58:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 14 14:59:05.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151049 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:59:05.293: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151049 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe 
the notification Oct 14 14:59:15.307: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151079 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:59:15.308: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-a 315945dd-7e0c-4b73-9973-e4e624fa09a6 1151079 0 2020-10-14 14:58:45 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Oct 14 14:59:25.322: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-b 8678dff7-d0dc-4dee-8fd4-e9ae15e2298a 1151109 0 2020-10-14 14:59:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:59:25.323: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-b 8678dff7-d0dc-4dee-8fd4-e9ae15e2298a 1151109 0 2020-10-14 14:59:25 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 14 14:59:35.336: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-b 8678dff7-d0dc-4dee-8fd4-e9ae15e2298a 1151139 0 2020-10-14 14:59:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 14:59:35.337: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-999 /api/v1/namespaces/watch-999/configmaps/e2e-watch-test-configmap-b 8678dff7-d0dc-4dee-8fd4-e9ae15e2298a 1151139 0 2020-10-14 14:59:25 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-14 14:59:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 14:59:45.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-999" for this suite. 
• [SLOW TEST:60.229 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":207,"skipped":3331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 14:59:45.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9718 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 14 14:59:45.452: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 14 14:59:45.584: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) Oct 14 14:59:47.625: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:59:49.593: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 14:59:51.594: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:59:53.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:59:55.594: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:59:57.594: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 14:59:59.593: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 15:00:01.592: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 14 15:00:01.602: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 15:00:03.609: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 15:00:05.610: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 14 15:00:11.753: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.125:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9718 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 15:00:11.753: INFO: >>> kubeConfig: /root/.kube/config I1014 15:00:11.866609 11 log.go:181] (0xa405030) (0xa4051f0) Create stream I1014 15:00:11.866784 11 log.go:181] (0xa405030) (0xa4051f0) Stream added, broadcasting: 1 I1014 15:00:11.871111 11 log.go:181] (0xa405030) Reply frame received for 1 I1014 15:00:11.871334 11 log.go:181] (0xa405030) (0xa405960) Create stream I1014 15:00:11.871453 11 log.go:181] (0xa405030) (0xa405960) Stream added, broadcasting: 3 I1014 15:00:11.873450 11 log.go:181] (0xa405030) Reply frame received for 3 I1014 15:00:11.873700 11 log.go:181] (0xa405030) 
(0xb58c0e0) Create stream I1014 15:00:11.873846 11 log.go:181] (0xa405030) (0xb58c0e0) Stream added, broadcasting: 5 I1014 15:00:11.875640 11 log.go:181] (0xa405030) Reply frame received for 5 I1014 15:00:11.955967 11 log.go:181] (0xa405030) Data frame received for 5 I1014 15:00:11.956124 11 log.go:181] (0xb58c0e0) (5) Data frame handling I1014 15:00:11.956242 11 log.go:181] (0xa405030) Data frame received for 3 I1014 15:00:11.956374 11 log.go:181] (0xa405960) (3) Data frame handling I1014 15:00:11.956531 11 log.go:181] (0xa405960) (3) Data frame sent I1014 15:00:11.956657 11 log.go:181] (0xa405030) Data frame received for 3 I1014 15:00:11.956780 11 log.go:181] (0xa405960) (3) Data frame handling I1014 15:00:11.958758 11 log.go:181] (0xa405030) Data frame received for 1 I1014 15:00:11.958856 11 log.go:181] (0xa4051f0) (1) Data frame handling I1014 15:00:11.958976 11 log.go:181] (0xa4051f0) (1) Data frame sent I1014 15:00:11.959089 11 log.go:181] (0xa405030) (0xa4051f0) Stream removed, broadcasting: 1 I1014 15:00:11.959175 11 log.go:181] (0xa405030) Go away received I1014 15:00:11.959451 11 log.go:181] (0xa405030) (0xa4051f0) Stream removed, broadcasting: 1 I1014 15:00:11.959543 11 log.go:181] (0xa405030) (0xa405960) Stream removed, broadcasting: 3 I1014 15:00:11.959624 11 log.go:181] (0xa405030) (0xb58c0e0) Stream removed, broadcasting: 5 Oct 14 15:00:11.959: INFO: Found all expected endpoints: [netserver-0] Oct 14 15:00:11.964: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.200:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9718 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 15:00:11.964: INFO: >>> kubeConfig: /root/.kube/config I1014 15:00:12.072211 11 log.go:181] (0xa0d59d0) (0x9820000) Create stream I1014 15:00:12.072390 11 log.go:181] (0xa0d59d0) (0x9820000) Stream added, broadcasting: 1 I1014 
15:00:12.076044 11 log.go:181] (0xa0d59d0) Reply frame received for 1 I1014 15:00:12.076231 11 log.go:181] (0xa0d59d0) (0xa6cad20) Create stream I1014 15:00:12.076315 11 log.go:181] (0xa0d59d0) (0xa6cad20) Stream added, broadcasting: 3 I1014 15:00:12.078630 11 log.go:181] (0xa0d59d0) Reply frame received for 3 I1014 15:00:12.078778 11 log.go:181] (0xa0d59d0) (0xa6cb0a0) Create stream I1014 15:00:12.078860 11 log.go:181] (0xa0d59d0) (0xa6cb0a0) Stream added, broadcasting: 5 I1014 15:00:12.080238 11 log.go:181] (0xa0d59d0) Reply frame received for 5 I1014 15:00:12.128026 11 log.go:181] (0xa0d59d0) Data frame received for 3 I1014 15:00:12.128210 11 log.go:181] (0xa6cad20) (3) Data frame handling I1014 15:00:12.128310 11 log.go:181] (0xa0d59d0) Data frame received for 5 I1014 15:00:12.128420 11 log.go:181] (0xa6cb0a0) (5) Data frame handling I1014 15:00:12.128490 11 log.go:181] (0xa6cad20) (3) Data frame sent I1014 15:00:12.128582 11 log.go:181] (0xa0d59d0) Data frame received for 3 I1014 15:00:12.128637 11 log.go:181] (0xa6cad20) (3) Data frame handling I1014 15:00:12.130094 11 log.go:181] (0xa0d59d0) Data frame received for 1 I1014 15:00:12.130316 11 log.go:181] (0x9820000) (1) Data frame handling I1014 15:00:12.130539 11 log.go:181] (0x9820000) (1) Data frame sent I1014 15:00:12.130782 11 log.go:181] (0xa0d59d0) (0x9820000) Stream removed, broadcasting: 1 I1014 15:00:12.130993 11 log.go:181] (0xa0d59d0) Go away received I1014 15:00:12.131599 11 log.go:181] (0xa0d59d0) (0x9820000) Stream removed, broadcasting: 1 I1014 15:00:12.131738 11 log.go:181] (0xa0d59d0) (0xa6cad20) Stream removed, broadcasting: 3 I1014 15:00:12.131888 11 log.go:181] (0xa0d59d0) (0xa6cb0a0) Stream removed, broadcasting: 5 Oct 14 15:00:12.132: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 
15:00:12.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9718" for this suite. • [SLOW TEST:26.758 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3369,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:00:12.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:00:12.240: INFO: Creating ReplicaSet 
my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db Oct 14 15:00:12.271: INFO: Pod name my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db: Found 0 pods out of 1 Oct 14 15:00:17.315: INFO: Pod name my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db: Found 1 pods out of 1 Oct 14 15:00:17.315: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db" is running Oct 14 15:00:17.326: INFO: Pod "my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db-hpkrf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 15:00:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 15:00:15 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 15:00:15 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 15:00:12 +0000 UTC Reason: Message:}]) Oct 14 15:00:17.331: INFO: Trying to dial the pod Oct 14 15:00:22.395: INFO: Controller my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db: Got expected result from replica 1 [my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db-hpkrf]: "my-hostname-basic-2b6d142e-0e1f-4afd-b8d5-c9f559cb50db-hpkrf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:00:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4866" for this suite. 
• [SLOW TEST:10.264 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":209,"skipped":3370,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:00:22.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 14 15:00:22.488: INFO: Waiting up to 5m0s for pod "pod-b55103a4-77dd-4888-80cb-b22dcede433a" in namespace "emptydir-5532" to be "Succeeded or Failed" Oct 14 15:00:22.523: INFO: Pod "pod-b55103a4-77dd-4888-80cb-b22dcede433a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.147173ms Oct 14 15:00:24.531: INFO: Pod "pod-b55103a4-77dd-4888-80cb-b22dcede433a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043112106s Oct 14 15:00:26.540: INFO: Pod "pod-b55103a4-77dd-4888-80cb-b22dcede433a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051314135s STEP: Saw pod success Oct 14 15:00:26.540: INFO: Pod "pod-b55103a4-77dd-4888-80cb-b22dcede433a" satisfied condition "Succeeded or Failed" Oct 14 15:00:26.546: INFO: Trying to get logs from node latest-worker pod pod-b55103a4-77dd-4888-80cb-b22dcede433a container test-container: STEP: delete the pod Oct 14 15:00:26.622: INFO: Waiting for pod pod-b55103a4-77dd-4888-80cb-b22dcede433a to disappear Oct 14 15:00:26.628: INFO: Pod pod-b55103a4-77dd-4888-80cb-b22dcede433a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:00:26.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5532" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3377,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:00:26.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:00:37.116: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:00:39.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284437, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284437, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284437, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284437, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:00:42.206: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:00:42.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-87" for this suite. STEP: Destroying namespace "webhook-87-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":211,"skipped":3385,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:00:42.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Oct 14 
15:02:43.191: INFO: Successfully updated pod "var-expansion-823b6d11-4fd8-49ef-be0b-c074349b7724" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 14 15:02:45.240: INFO: Deleting pod "var-expansion-823b6d11-4fd8-49ef-be0b-c074349b7724" in namespace "var-expansion-2171" Oct 14 15:02:45.248: INFO: Wait up to 5m0s for pod "var-expansion-823b6d11-4fd8-49ef-be0b-c074349b7724" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:03:27.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2171" for this suite. • [SLOW TEST:164.822 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":212,"skipped":3390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:03:27.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 14 15:03:27.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4774' Oct 14 15:03:29.988: INFO: stderr: "" Oct 14 15:03:29.988: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 14 15:03:30.998: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:30.999: INFO: Found 0 / 1 Oct 14 15:03:31.997: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:31.997: INFO: Found 0 / 1 Oct 14 15:03:32.997: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:32.997: INFO: Found 0 / 1 Oct 14 15:03:33.998: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:33.999: INFO: Found 1 / 1 Oct 14 15:03:34.000: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Oct 14 15:03:34.007: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:34.008: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
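The "patching all pods" step applies a strategic merge patch. The patch body is tiny; assuming the same semantics as the logged `kubectl patch` invocation, it is equivalent to merging this fragment into each pod's metadata:

```yaml
# Strategic merge patch body: adds (or overwrites) the annotation
# x: "y" on the pod without touching any other field.
metadata:
  annotations:
    x: "y"
```

The "checking annotations" step then re-lists the pods through the same selector and asserts the annotation is present.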
Oct 14 15:03:34.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config patch pod agnhost-primary-bthtx --namespace=kubectl-4774 -p {"metadata":{"annotations":{"x":"y"}}}' Oct 14 15:03:35.319: INFO: stderr: "" Oct 14 15:03:35.319: INFO: stdout: "pod/agnhost-primary-bthtx patched\n" STEP: checking annotations Oct 14 15:03:35.335: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:03:35.335: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:03:35.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4774" for this suite. • [SLOW TEST:8.070 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":213,"skipped":3455,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:03:35.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:03:35.509: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 14 15:03:35.587: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:35.613: INFO: Number of nodes with available pods: 0 Oct 14 15:03:35.613: INFO: Node latest-worker is running more than one daemon pod Oct 14 15:03:36.627: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:36.633: INFO: Number of nodes with available pods: 0 Oct 14 15:03:36.634: INFO: Node latest-worker is running more than one daemon pod Oct 14 15:03:37.635: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:37.707: INFO: Number of nodes with available pods: 0 Oct 14 15:03:37.707: INFO: Node latest-worker is running more than one daemon pod Oct 14 15:03:38.628: 
INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:38.637: INFO: Number of nodes with available pods: 0 Oct 14 15:03:38.637: INFO: Node latest-worker is running more than one daemon pod Oct 14 15:03:39.623: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:39.630: INFO: Number of nodes with available pods: 1 Oct 14 15:03:39.630: INFO: Node latest-worker is running more than one daemon pod Oct 14 15:03:40.650: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:40.656: INFO: Number of nodes with available pods: 2 Oct 14 15:03:40.656: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 14 15:03:40.906: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:40.906: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:41.081: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:42.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:42.091: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 15:03:42.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:43.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:43.091: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:43.103: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:44.092: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:44.092: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:44.092: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:44.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:45.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:45.091: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:45.091: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:45.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:46.092: INFO: Wrong image for pod: daemon-set-4k4tp. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:46.092: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:46.092: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:46.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:47.096: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:47.096: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:47.096: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:47.107: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:48.103: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:48.103: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:48.103: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:48.113: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:49.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:49.091: INFO: Wrong image for pod: daemon-set-jh5g7. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:49.091: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:49.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:50.109: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:50.109: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:50.109: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:50.119: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:51.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:51.092: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:51.092: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:51.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:52.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:52.091: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 15:03:52.091: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:52.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:53.092: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:53.092: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:53.092: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:53.123: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:54.089: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:54.089: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:54.089: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:54.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:55.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:55.091: INFO: Wrong image for pod: daemon-set-jh5g7. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 15:03:55.091: INFO: Pod daemon-set-jh5g7 is not available Oct 14 15:03:55.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:56.093: INFO: Pod daemon-set-2ttwv is not available Oct 14 15:03:56.093: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:56.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:57.104: INFO: Pod daemon-set-2ttwv is not available Oct 14 15:03:57.104: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:57.113: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:58.090: INFO: Pod daemon-set-2ttwv is not available Oct 14 15:03:58.090: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:03:58.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:03:59.092: INFO: Pod daemon-set-2ttwv is not available Oct 14 15:03:59.092: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 15:03:59.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:00.091: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:04:00.102: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:01.090: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:04:01.098: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:02.090: INFO: Wrong image for pod: daemon-set-4k4tp. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 15:04:02.090: INFO: Pod daemon-set-4k4tp is not available Oct 14 15:04:02.099: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:03.090: INFO: Pod daemon-set-zwqpk is not available Oct 14 15:04:03.100: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Oct 14 15:04:03.109: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:03.115: INFO: Number of nodes with available pods: 1 Oct 14 15:04:03.115: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 15:04:04.257: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:04.263: INFO: Number of nodes with available pods: 1 Oct 14 15:04:04.263: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 15:04:05.222: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:05.522: INFO: Number of nodes with available pods: 1 Oct 14 15:04:05.522: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 15:04:06.127: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:06.135: INFO: Number of nodes with available pods: 1 Oct 14 15:04:06.135: INFO: Node latest-worker2 is running more than one daemon pod Oct 14 15:04:07.125: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 15:04:07.131: INFO: Number of nodes with available pods: 2 Oct 14 15:04:07.132: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in 
namespace daemonsets-8197, will wait for the garbage collector to delete the pods Oct 14 15:04:07.223: INFO: Deleting DaemonSet.extensions daemon-set took: 9.374223ms Oct 14 15:04:07.624: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.826681ms Oct 14 15:04:10.330: INFO: Number of nodes with available pods: 0 Oct 14 15:04:10.330: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 15:04:10.334: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8197/daemonsets","resourceVersion":"1152285"},"items":null} Oct 14 15:04:10.338: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8197/pods","resourceVersion":"1152285"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:04:10.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8197" for this suite. 
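The rolling update traced above (one pod unavailable at a time, old `httpd:2.4.38-alpine` pods replaced by `agnhost:2.20` pods node by node) corresponds to a DaemonSet of roughly this shape. This is a hedged sketch: the two images and the DaemonSet name match the log, the labels and `maxUnavailable` value are assumptions.

```yaml
# Sketch of a DaemonSet using the RollingUpdate strategy this test
# exercises. Label names and maxUnavailable are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # placeholder label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # assumed; consistent with one pod cycling at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
        - name: app   # placeholder container name
          image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # the updated image
```

Changing `spec.template.spec.containers[0].image` is what triggers the controller to delete and recreate pods under the RollingUpdate strategy, producing the "Wrong image for pod" / "is not available" loop in the log until every node runs the new revision.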
• [SLOW TEST:35.018 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":214,"skipped":3476,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:04:10.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Oct 14 15:04:10.496: INFO: Waiting up to 5m0s for pod "var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450" in namespace "var-expansion-7117" to be "Succeeded or Failed" Oct 14 15:04:10.564: INFO: Pod "var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450": Phase="Pending", Reason="", readiness=false. 
Elapsed: 68.613449ms Oct 14 15:04:12.573: INFO: Pod "var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077581699s Oct 14 15:04:14.583: INFO: Pod "var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086670669s STEP: Saw pod success Oct 14 15:04:14.583: INFO: Pod "var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450" satisfied condition "Succeeded or Failed" Oct 14 15:04:14.588: INFO: Trying to get logs from node latest-worker pod var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450 container dapi-container: STEP: delete the pod Oct 14 15:04:14.640: INFO: Waiting for pod var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450 to disappear Oct 14 15:04:14.654: INFO: Pod var-expansion-a8ddd859-147c-4d87-a657-d11b38d54450 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:04:14.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7117" for this suite. 
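The "substitution in volume subpath" behaviour this test verifies is driven by `subPathExpr`, which expands `$(VAR)` references from the container's environment at pod start. A minimal sketch, assuming placeholder names (only the container name `dapi-container` appears in the log):

```yaml
# Hedged sketch of variable expansion in a volume subpath.
# All names except dapi-container are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # placeholder
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox   # assumed image
      command: ["sh", "-c", "ls /subpath_mount"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: workdir
          mountPath: /subpath_mount
          subPathExpr: $(POD_NAME)   # expanded before the container starts
  volumes:
    - name: workdir
      emptyDir: {}
```

The pod mounts a per-pod subdirectory of the emptyDir volume, runs its command, and exits, matching the Pending → Succeeded progression in the log.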
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":215,"skipped":3477,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:04:14.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:04:18.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-125" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:04:18.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 14 15:04:19.007: INFO: Waiting up to 5m0s for pod "pod-c0061d82-76d0-4687-bc59-01b2898afc03" in namespace "emptydir-7866" to be "Succeeded or Failed" Oct 14 15:04:19.050: INFO: Pod "pod-c0061d82-76d0-4687-bc59-01b2898afc03": Phase="Pending", Reason="", readiness=false. Elapsed: 42.585581ms Oct 14 15:04:21.056: INFO: Pod "pod-c0061d82-76d0-4687-bc59-01b2898afc03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048518258s Oct 14 15:04:23.064: INFO: Pod "pod-c0061d82-76d0-4687-bc59-01b2898afc03": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056979702s STEP: Saw pod success Oct 14 15:04:23.065: INFO: Pod "pod-c0061d82-76d0-4687-bc59-01b2898afc03" satisfied condition "Succeeded or Failed" Oct 14 15:04:23.070: INFO: Trying to get logs from node latest-worker pod pod-c0061d82-76d0-4687-bc59-01b2898afc03 container test-container: STEP: delete the pod Oct 14 15:04:23.115: INFO: Waiting for pod pod-c0061d82-76d0-4687-bc59-01b2898afc03 to disappear Oct 14 15:04:23.162: INFO: Pod pod-c0061d82-76d0-4687-bc59-01b2898afc03 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:04:23.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7866" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3550,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:04:23.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9770 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 14 15:04:23.293: INFO: Found 0 stateful pods, waiting for 3 Oct 14 15:04:33.331: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 15:04:33.331: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 15:04:33.331: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 14 15:04:43.302: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 15:04:43.303: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 15:04:43.303: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 14 15:04:43.342: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 14 15:04:53.451: INFO: Updating stateful set ss2 Oct 14 15:04:53.482: INFO: Waiting for Pod statefulset-9770/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Oct 14 15:05:04.159: INFO: Found 2 stateful pods, waiting for 3 Oct 14 15:05:14.170: INFO: Waiting for pod ss2-0 to 
enter Running - Ready=true, currently Running - Ready=true Oct 14 15:05:14.170: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 15:05:14.171: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 14 15:05:14.204: INFO: Updating stateful set ss2 Oct 14 15:05:14.275: INFO: Waiting for Pod statefulset-9770/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 15:05:24.292: INFO: Waiting for Pod statefulset-9770/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 15:05:34.317: INFO: Updating stateful set ss2 Oct 14 15:05:34.410: INFO: Waiting for StatefulSet statefulset-9770/ss2 to complete update Oct 14 15:05:34.410: INFO: Waiting for Pod statefulset-9770/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 14 15:05:44.427: INFO: Waiting for StatefulSet statefulset-9770/ss2 to complete update Oct 14 15:05:44.428: INFO: Waiting for Pod statefulset-9770/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 15:05:54.448: INFO: Deleting all statefulset in ns statefulset-9770 Oct 14 15:05:54.454: INFO: Scaling statefulset ss2 to 0 Oct 14 15:06:34.493: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 15:06:34.497: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:06:34.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9770" for this suite. 
• [SLOW TEST:131.349 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":218,"skipped":3561,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:06:34.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2664 STEP: creating service affinity-nodeport-transition in namespace services-2664 STEP: creating replication controller affinity-nodeport-transition in namespace services-2664 I1014 15:06:34.758702 11 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-2664, replica count: 3 I1014 15:06:37.810195 11 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 15:06:40.811069 11 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 15:06:40.835: INFO: Creating new exec pod Oct 14 15:06:45.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Oct 14 15:06:50.767: INFO: stderr: "I1014 15:06:50.670674 3017 log.go:181] (0x2954000) (0x2954070) Create stream\nI1014 15:06:50.674533 3017 log.go:181] (0x2954000) (0x2954070) Stream added, broadcasting: 1\nI1014 15:06:50.686408 3017 log.go:181] (0x2954000) Reply frame received for 1\nI1014 15:06:50.686871 3017 log.go:181] (0x2954000) (0x2cfc070) Create stream\nI1014 15:06:50.686942 3017 log.go:181] (0x2954000) (0x2cfc070) Stream added, broadcasting: 3\nI1014 15:06:50.688646 3017 log.go:181] (0x2954000) Reply frame received for 3\nI1014 15:06:50.689126 3017 log.go:181] (0x2954000) (0x27d8000) Create stream\nI1014 15:06:50.689210 3017 log.go:181] (0x2954000) (0x27d8000) Stream added, broadcasting: 5\nI1014 15:06:50.690495 3017 log.go:181] (0x2954000) Reply frame received for 5\nI1014 
15:06:50.749990 3017 log.go:181] (0x2954000) Data frame received for 5\nI1014 15:06:50.750212 3017 log.go:181] (0x27d8000) (5) Data frame handling\nI1014 15:06:50.750341 3017 log.go:181] (0x2954000) Data frame received for 3\nI1014 15:06:50.750514 3017 log.go:181] (0x27d8000) (5) Data frame sent\nI1014 15:06:50.750671 3017 log.go:181] (0x2954000) Data frame received for 5\nI1014 15:06:50.750738 3017 log.go:181] (0x27d8000) (5) Data frame handling\nI1014 15:06:50.750866 3017 log.go:181] (0x2cfc070) (3) Data frame handling\nI1014 15:06:50.751418 3017 log.go:181] (0x2954000) Data frame received for 1\nI1014 15:06:50.751540 3017 log.go:181] (0x2954070) (1) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1014 15:06:50.751694 3017 log.go:181] (0x2954070) (1) Data frame sent\nI1014 15:06:50.753654 3017 log.go:181] (0x2954000) (0x2954070) Stream removed, broadcasting: 1\nI1014 15:06:50.758611 3017 log.go:181] (0x2954000) (0x2954070) Stream removed, broadcasting: 1\nI1014 15:06:50.758861 3017 log.go:181] (0x2954000) (0x2cfc070) Stream removed, broadcasting: 3\nI1014 15:06:50.759054 3017 log.go:181] (0x2954000) (0x27d8000) Stream removed, broadcasting: 5\n" Oct 14 15:06:50.768: INFO: stdout: "" Oct 14 15:06:50.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c nc -zv -t -w 2 10.110.43.80 80' Oct 14 15:06:52.283: INFO: stderr: "I1014 15:06:52.159521 3037 log.go:181] (0x2ea80e0) (0x2ea8150) Create stream\nI1014 15:06:52.161582 3037 log.go:181] (0x2ea80e0) (0x2ea8150) Stream added, broadcasting: 1\nI1014 15:06:52.174608 3037 log.go:181] (0x2ea80e0) Reply frame received for 1\nI1014 15:06:52.175512 3037 log.go:181] (0x2ea80e0) (0x2a5c070) Create stream\nI1014 15:06:52.175615 3037 log.go:181] (0x2ea80e0) (0x2a5c070) Stream added, broadcasting: 3\nI1014 
15:06:52.177889 3037 log.go:181] (0x2ea80e0) Reply frame received for 3\nI1014 15:06:52.178474 3037 log.go:181] (0x2ea80e0) (0x2ea8310) Create stream\nI1014 15:06:52.178606 3037 log.go:181] (0x2ea80e0) (0x2ea8310) Stream added, broadcasting: 5\nI1014 15:06:52.180297 3037 log.go:181] (0x2ea80e0) Reply frame received for 5\nI1014 15:06:52.264017 3037 log.go:181] (0x2ea80e0) Data frame received for 5\nI1014 15:06:52.264395 3037 log.go:181] (0x2ea80e0) Data frame received for 1\nI1014 15:06:52.264698 3037 log.go:181] (0x2ea80e0) Data frame received for 3\nI1014 15:06:52.265002 3037 log.go:181] (0x2a5c070) (3) Data frame handling\nI1014 15:06:52.265171 3037 log.go:181] (0x2ea8150) (1) Data frame handling\nI1014 15:06:52.265501 3037 log.go:181] (0x2ea8310) (5) Data frame handling\nI1014 15:06:52.265949 3037 log.go:181] (0x2ea8150) (1) Data frame sent\nI1014 15:06:52.266397 3037 log.go:181] (0x2ea8310) (5) Data frame sent\nI1014 15:06:52.266521 3037 log.go:181] (0x2ea80e0) Data frame received for 5\nI1014 15:06:52.266621 3037 log.go:181] (0x2ea8310) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.43.80 80\nConnection to 10.110.43.80 80 port [tcp/http] succeeded!\nI1014 15:06:52.269224 3037 log.go:181] (0x2ea80e0) (0x2ea8150) Stream removed, broadcasting: 1\nI1014 15:06:52.270357 3037 log.go:181] (0x2ea80e0) Go away received\nI1014 15:06:52.274078 3037 log.go:181] (0x2ea80e0) (0x2ea8150) Stream removed, broadcasting: 1\nI1014 15:06:52.274346 3037 log.go:181] (0x2ea80e0) (0x2a5c070) Stream removed, broadcasting: 3\nI1014 15:06:52.274571 3037 log.go:181] (0x2ea80e0) (0x2ea8310) Stream removed, broadcasting: 5\n" Oct 14 15:06:52.283: INFO: stdout: "" Oct 14 15:06:52.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31827' Oct 14 15:06:53.918: INFO: stderr: "I1014 15:06:53.782968 3057 log.go:181] (0x24da000) (0x24da070) 
Create stream\nI1014 15:06:53.785942 3057 log.go:181] (0x24da000) (0x24da070) Stream added, broadcasting: 1\nI1014 15:06:53.797715 3057 log.go:181] (0x24da000) Reply frame received for 1\nI1014 15:06:53.798721 3057 log.go:181] (0x24da000) (0x24da2a0) Create stream\nI1014 15:06:53.798833 3057 log.go:181] (0x24da000) (0x24da2a0) Stream added, broadcasting: 3\nI1014 15:06:53.801003 3057 log.go:181] (0x24da000) Reply frame received for 3\nI1014 15:06:53.801633 3057 log.go:181] (0x24da000) (0x24da4d0) Create stream\nI1014 15:06:53.801795 3057 log.go:181] (0x24da000) (0x24da4d0) Stream added, broadcasting: 5\nI1014 15:06:53.803673 3057 log.go:181] (0x24da000) Reply frame received for 5\nI1014 15:06:53.898587 3057 log.go:181] (0x24da000) Data frame received for 5\nI1014 15:06:53.898968 3057 log.go:181] (0x24da000) Data frame received for 3\nI1014 15:06:53.899284 3057 log.go:181] (0x24da2a0) (3) Data frame handling\nI1014 15:06:53.899685 3057 log.go:181] (0x24da000) Data frame received for 1\nI1014 15:06:53.899797 3057 log.go:181] (0x24da070) (1) Data frame handling\nI1014 15:06:53.900003 3057 log.go:181] (0x24da4d0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 31827\nConnection to 172.18.0.15 31827 port [tcp/31827] succeeded!\nI1014 15:06:53.903106 3057 log.go:181] (0x24da070) (1) Data frame sent\nI1014 15:06:53.903254 3057 log.go:181] (0x24da4d0) (5) Data frame sent\nI1014 15:06:53.903496 3057 log.go:181] (0x24da000) Data frame received for 5\nI1014 15:06:53.903606 3057 log.go:181] (0x24da4d0) (5) Data frame handling\nI1014 15:06:53.904412 3057 log.go:181] (0x24da000) (0x24da070) Stream removed, broadcasting: 1\nI1014 15:06:53.906877 3057 log.go:181] (0x24da000) Go away received\nI1014 15:06:53.910208 3057 log.go:181] (0x24da000) (0x24da070) Stream removed, broadcasting: 1\nI1014 15:06:53.910408 3057 log.go:181] (0x24da000) (0x24da2a0) Stream removed, broadcasting: 3\nI1014 15:06:53.910584 3057 log.go:181] (0x24da000) (0x24da4d0) Stream removed, broadcasting: 
5\n" Oct 14 15:06:53.919: INFO: stdout: "" Oct 14 15:06:53.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31827' Oct 14 15:06:55.562: INFO: stderr: "I1014 15:06:55.450615 3077 log.go:181] (0x2ab0000) (0x2ab0070) Create stream\nI1014 15:06:55.453017 3077 log.go:181] (0x2ab0000) (0x2ab0070) Stream added, broadcasting: 1\nI1014 15:06:55.462674 3077 log.go:181] (0x2ab0000) Reply frame received for 1\nI1014 15:06:55.463105 3077 log.go:181] (0x2ab0000) (0x24ba000) Create stream\nI1014 15:06:55.463160 3077 log.go:181] (0x2ab0000) (0x24ba000) Stream added, broadcasting: 3\nI1014 15:06:55.464429 3077 log.go:181] (0x2ab0000) Reply frame received for 3\nI1014 15:06:55.464614 3077 log.go:181] (0x2ab0000) (0x24ba1c0) Create stream\nI1014 15:06:55.464671 3077 log.go:181] (0x2ab0000) (0x24ba1c0) Stream added, broadcasting: 5\nI1014 15:06:55.465983 3077 log.go:181] (0x2ab0000) Reply frame received for 5\nI1014 15:06:55.544307 3077 log.go:181] (0x2ab0000) Data frame received for 3\nI1014 15:06:55.544630 3077 log.go:181] (0x24ba000) (3) Data frame handling\nI1014 15:06:55.545033 3077 log.go:181] (0x2ab0000) Data frame received for 5\nI1014 15:06:55.545280 3077 log.go:181] (0x24ba1c0) (5) Data frame handling\nI1014 15:06:55.545815 3077 log.go:181] (0x2ab0000) Data frame received for 1\nI1014 15:06:55.546036 3077 log.go:181] (0x2ab0070) (1) Data frame handling\nI1014 15:06:55.547282 3077 log.go:181] (0x24ba1c0) (5) Data frame sent\nI1014 15:06:55.547635 3077 log.go:181] (0x2ab0070) (1) Data frame sent\nI1014 15:06:55.547754 3077 log.go:181] (0x2ab0000) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.14 31827\nConnection to 172.18.0.14 31827 port [tcp/31827] succeeded!\nI1014 15:06:55.547867 3077 log.go:181] (0x24ba1c0) (5) Data frame handling\nI1014 15:06:55.549201 3077 log.go:181] (0x2ab0000) (0x2ab0070) Stream removed, 
broadcasting: 1\nI1014 15:06:55.550955 3077 log.go:181] (0x2ab0000) Go away received\nI1014 15:06:55.552969 3077 log.go:181] (0x2ab0000) (0x2ab0070) Stream removed, broadcasting: 1\nI1014 15:06:55.553567 3077 log.go:181] (0x2ab0000) (0x24ba000) Stream removed, broadcasting: 3\nI1014 15:06:55.554007 3077 log.go:181] (0x2ab0000) (0x24ba1c0) Stream removed, broadcasting: 5\n" Oct 14 15:06:55.563: INFO: stdout: "" Oct 14 15:06:55.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31827/ ; done' Oct 14 15:06:57.189: INFO: stderr: "I1014 15:06:56.983097 3097 log.go:181] (0x2e2a000) (0x2e2a0e0) Create stream\nI1014 15:06:56.987639 3097 log.go:181] (0x2e2a000) (0x2e2a0e0) Stream added, broadcasting: 1\nI1014 15:06:56.998114 3097 log.go:181] (0x2e2a000) Reply frame received for 1\nI1014 15:06:56.998554 3097 log.go:181] (0x2e2a000) (0x2969730) Create stream\nI1014 15:06:56.998612 3097 log.go:181] (0x2e2a000) (0x2969730) Stream added, broadcasting: 3\nI1014 15:06:57.000198 3097 log.go:181] (0x2e2a000) Reply frame received for 3\nI1014 15:06:57.000616 3097 log.go:181] (0x2e2a000) (0x2e2a2a0) Create stream\nI1014 15:06:57.000717 3097 log.go:181] (0x2e2a000) (0x2e2a2a0) Stream added, broadcasting: 5\nI1014 15:06:57.002219 3097 log.go:181] (0x2e2a000) Reply frame received for 5\nI1014 15:06:57.078780 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.079192 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.079446 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.079674 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.081027 3097 log.go:181] (0x2969730) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.081727 3097 log.go:181] 
(0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.081955 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.082113 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.082281 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.082476 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.082663 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -sI1014 15:06:57.082851 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.082986 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.083114 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.083309 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.083465 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.083631 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.083787 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.089761 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.089862 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.089981 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.090553 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.090662 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.090768 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n+ echo\nI1014 15:06:57.090883 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.090964 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.091123 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.091274 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.091358 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.091456 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.095099 
3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.095221 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.095373 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.095747 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.095889 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI1014 15:06:57.096069 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.096283 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.096484 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.096658 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.096765 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n 2 http://172.18.0.15:31827/\nI1014 15:06:57.096950 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.097211 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.099005 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.099117 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.099226 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.099625 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.099744 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.099866 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.100009 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.100151 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.100544 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.105294 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.105433 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.105566 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.106060 3097 log.go:181] (0x2e2a000) Data frame received for 
5\nI1014 15:06:57.106195 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.106330 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.106522 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.106678 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.106826 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.111261 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.111410 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.111542 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.111749 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.111822 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/I1014 15:06:57.111929 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.112025 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.112092 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.112198 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.112391 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.112537 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.112670 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n\nI1014 15:06:57.117195 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.117326 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.117445 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.117869 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.118018 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.118195 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.118294 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.15:31827/\nI1014 15:06:57.118464 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.118635 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.120759 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.121076 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.121248 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.121420 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.121585 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.121685 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.121816 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.121927 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.122032 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.126526 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.126638 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.126787 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.127410 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.127550 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/I1014 15:06:57.127720 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.127910 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.128046 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.128214 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.128375 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n\nI1014 15:06:57.128537 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.128669 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.133557 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.133661 3097 
log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.133781 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.133883 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.133996 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n+ echo\n+ curlI1014 15:06:57.134105 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.134200 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.134329 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.134406 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.134493 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.134592 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.134788 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.137554 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.137679 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.137794 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.138103 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.138247 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.138367 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.138508 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.138635 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.138766 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.143907 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.144052 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.144208 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.144532 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.144647 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 
15:06:57.144754 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1014 15:06:57.145063 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.145206 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n http://172.18.0.15:31827/\nI1014 15:06:57.145349 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.145538 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.145729 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.145900 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.149499 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.149628 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.149773 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.149938 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.150033 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.150129 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.150205 3097 log.go:181] (0x2e2a000) Data frame received for 5\n+ echo\n+ curl -qI1014 15:06:57.150283 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.150365 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.150439 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.150631 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.150821 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.154143 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.154288 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.154427 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.155029 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.155194 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.155344 3097 log.go:181] (0x2e2a2a0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.155471 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.155591 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.155739 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.160798 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.161073 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.161200 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.161667 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.161796 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\n+ echo\n+ curl -qI1014 15:06:57.161900 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.162083 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.162234 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\nI1014 15:06:57.162358 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.162471 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.162609 3097 log.go:181] (0x2e2a2a0) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:57.162792 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.167505 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.167681 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.167845 3097 log.go:181] (0x2969730) (3) Data frame sent\nI1014 15:06:57.168403 3097 log.go:181] (0x2e2a000) Data frame received for 5\nI1014 15:06:57.168516 3097 log.go:181] (0x2e2a2a0) (5) Data frame handling\nI1014 15:06:57.168707 3097 log.go:181] (0x2e2a000) Data frame received for 3\nI1014 15:06:57.168812 3097 log.go:181] (0x2969730) (3) Data frame handling\nI1014 15:06:57.174348 3097 log.go:181] (0x2e2a000) Data frame received for 1\nI1014 15:06:57.174454 3097 log.go:181] (0x2e2a0e0) (1) Data frame handling\nI1014 15:06:57.174570 3097 
log.go:181] (0x2e2a0e0) (1) Data frame sent\nI1014 15:06:57.175362 3097 log.go:181] (0x2e2a000) (0x2e2a0e0) Stream removed, broadcasting: 1\nI1014 15:06:57.177404 3097 log.go:181] (0x2e2a000) Go away received\nI1014 15:06:57.180039 3097 log.go:181] (0x2e2a000) (0x2e2a0e0) Stream removed, broadcasting: 1\nI1014 15:06:57.180653 3097 log.go:181] (0x2e2a000) (0x2969730) Stream removed, broadcasting: 3\nI1014 15:06:57.181048 3097 log.go:181] (0x2e2a000) (0x2e2a2a0) Stream removed, broadcasting: 5\n" Oct 14 15:06:57.195: INFO: stdout: "\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-jkg7c\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-jkg7c\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-fjqxq\naffinity-nodeport-transition-jkg7c\naffinity-nodeport-transition-jkg7c\naffinity-nodeport-transition-jkg7c\naffinity-nodeport-transition-vfcjw" Oct 14 15:06:57.195: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-jkg7c Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-jkg7c Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: 
affinity-nodeport-transition-vfcjw Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-fjqxq Oct 14 15:06:57.196: INFO: Received response from host: affinity-nodeport-transition-jkg7c Oct 14 15:06:57.197: INFO: Received response from host: affinity-nodeport-transition-jkg7c Oct 14 15:06:57.197: INFO: Received response from host: affinity-nodeport-transition-jkg7c Oct 14 15:06:57.197: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:57.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-2664 execpod-affinity2k5v2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:31827/ ; done' Oct 14 15:06:58.825: INFO: stderr: "I1014 15:06:58.581242 3117 log.go:181] (0x30f60e0) (0x30f6150) Create stream\nI1014 15:06:58.585266 3117 log.go:181] (0x30f60e0) (0x30f6150) Stream added, broadcasting: 1\nI1014 15:06:58.606878 3117 log.go:181] (0x30f60e0) Reply frame received for 1\nI1014 15:06:58.607444 3117 log.go:181] (0x30f60e0) (0x273a460) Create stream\nI1014 15:06:58.607513 3117 log.go:181] (0x30f60e0) (0x273a460) Stream added, broadcasting: 3\nI1014 15:06:58.608762 3117 log.go:181] (0x30f60e0) Reply frame received for 3\nI1014 15:06:58.609023 3117 log.go:181] (0x30f60e0) (0x30f61c0) Create stream\nI1014 15:06:58.609089 3117 log.go:181] (0x30f60e0) (0x30f61c0) Stream added, broadcasting: 5\nI1014 15:06:58.609979 3117 log.go:181] (0x30f60e0) Reply frame received for 5\nI1014 15:06:58.704156 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.704417 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.704682 3117 log.go:181] (0x30f60e0) Data frame received for 3\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.705046 
3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.705277 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.705484 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.710213 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.710406 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.710630 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.711363 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.711442 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.711572 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\nI1014 15:06:58.711693 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.711866 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.712012 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.712153 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.712282 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.712416 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.718904 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.719049 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.719213 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.719647 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.719771 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1014 15:06:58.719890 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.720063 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.720255 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.720427 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.720527 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.720658 3117 
log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.720774 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n http://172.18.0.15:31827/\nI1014 15:06:58.727667 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.727837 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.727997 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.728132 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.728268 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.728502 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.728972 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.729257 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.729433 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.732308 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.732417 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.732529 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.733588 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.733719 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.733867 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.733996 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.734108 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.734248 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.738855 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.738953 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.739048 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.739817 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.739938 3117 log.go:181] (0x30f61c0) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.740039 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.740163 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.740258 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.740374 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.744560 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.744661 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.744781 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.745524 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.745642 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.745753 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.745854 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.745956 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.746062 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.750591 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.750663 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.750751 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.751113 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.751183 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.751268 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.751386 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.751521 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.751638 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.756498 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.756604 3117 log.go:181] 
(0x273a460) (3) Data frame handling\nI1014 15:06:58.756736 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.757447 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.757565 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.757661 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.757782 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.757894 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.757975 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.762363 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.762429 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.762502 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.763171 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.763300 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.763391 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.763504 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.763604 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.763697 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.769764 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.769834 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.769930 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.770529 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.770636 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.770696 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.770779 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.770832 
3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.770924 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.774826 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.774906 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.775012 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.775801 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.775910 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.776019 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.776167 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.776313 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.776429 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.780466 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.780578 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.780712 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.781326 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.781486 3117 log.go:181] (0x30f61c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.781611 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.781768 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.781905 3117 log.go:181] (0x30f61c0) (5) Data frame sent\nI1014 15:06:58.782001 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.787205 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.787317 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.787463 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.788283 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.788383 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 
15:06:58.788485 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.788564 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.788637 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.788731 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.794709 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.794836 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.794973 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.795621 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.795733 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.795812 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.795903 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.795997 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.796102 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.800078 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.800242 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.800434 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.800943 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.801083 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.801167 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.801345 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.801515 3117 log.go:181] (0x30f61c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:31827/\nI1014 15:06:58.801619 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.805706 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.805823 3117 log.go:181] (0x273a460) (3) Data frame 
handling\nI1014 15:06:58.805946 3117 log.go:181] (0x273a460) (3) Data frame sent\nI1014 15:06:58.806711 3117 log.go:181] (0x30f60e0) Data frame received for 5\nI1014 15:06:58.806853 3117 log.go:181] (0x30f61c0) (5) Data frame handling\nI1014 15:06:58.807001 3117 log.go:181] (0x30f60e0) Data frame received for 3\nI1014 15:06:58.807153 3117 log.go:181] (0x273a460) (3) Data frame handling\nI1014 15:06:58.808522 3117 log.go:181] (0x30f60e0) Data frame received for 1\nI1014 15:06:58.808632 3117 log.go:181] (0x30f6150) (1) Data frame handling\nI1014 15:06:58.808727 3117 log.go:181] (0x30f6150) (1) Data frame sent\nI1014 15:06:58.809528 3117 log.go:181] (0x30f60e0) (0x30f6150) Stream removed, broadcasting: 1\nI1014 15:06:58.811920 3117 log.go:181] (0x30f60e0) Go away received\nI1014 15:06:58.815151 3117 log.go:181] (0x30f60e0) (0x30f6150) Stream removed, broadcasting: 1\nI1014 15:06:58.815359 3117 log.go:181] (0x30f60e0) (0x273a460) Stream removed, broadcasting: 3\nI1014 15:06:58.815547 3117 log.go:181] (0x30f60e0) (0x30f61c0) Stream removed, broadcasting: 5\n" Oct 14 15:06:58.832: INFO: stdout: "\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw\naffinity-nodeport-transition-vfcjw" Oct 14 15:06:58.832: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.832: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.832: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.832: INFO: Received 
response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.832: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Received response from host: affinity-nodeport-transition-vfcjw Oct 14 15:06:58.833: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-2664, will wait for the garbage collector to delete the pods Oct 14 15:06:58.974: INFO: Deleting ReplicationController affinity-nodeport-transition took: 8.291858ms Oct 14 15:06:59.375: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 401.057534ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:07:15.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2664" for this suite. 
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:41.314 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":219,"skipped":3566,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:07:15.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Oct 14 15:07:15.972: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Oct 14 15:07:15.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:18.397: INFO: stderr: ""
Oct 14 15:07:18.397: INFO: stdout: "service/agnhost-replica created\n"
Oct 14 15:07:18.398: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Oct 14 15:07:18.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:20.763: INFO: stderr: ""
Oct 14 15:07:20.763: INFO: stdout: "service/agnhost-primary created\n"
Oct 14 15:07:20.764: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct 14 15:07:20.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:23.077: INFO: stderr: ""
Oct 14 15:07:23.078: INFO: stdout: "service/frontend created\n"
Oct 14 15:07:23.082: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct 14 15:07:23.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:25.417: INFO: stderr: ""
Oct 14 15:07:25.417: INFO: stdout: "deployment.apps/frontend created\n"
Oct 14 15:07:25.419: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 14 15:07:25.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:29.231: INFO: stderr: ""
Oct 14 15:07:29.231: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Oct 14 15:07:29.233: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 14 15:07:29.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7643'
Oct 14 15:07:32.936: INFO: stderr: ""
Oct 14 15:07:32.937: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 14 15:07:32.937: INFO: Waiting for all frontend pods to be Running.
Oct 14 15:07:32.989: INFO: Waiting for frontend to serve content.
Oct 14 15:07:34.240: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Oct 14 15:07:39.251: INFO: Trying to add a new entry to the guestbook.
Oct 14 15:07:39.267: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 14 15:07:39.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:40.534: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:40.535: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 14 15:07:40.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:41.814: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:41.814: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 14 15:07:41.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:43.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:43.119: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 14 15:07:43.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:44.334: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:44.334: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 14 15:07:44.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:45.617: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:45.617: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 14 15:07:45.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7643'
Oct 14 15:07:46.987: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 14 15:07:46.988: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:07:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7643" for this suite. 
• [SLOW TEST:31.205 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351
    should create and stop a working application [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":220,"skipped":3575,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:07:47.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 15:07:47.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown 
properties
Oct 14 15:08:08.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2912 create -f -'
Oct 14 15:08:13.920: INFO: stderr: ""
Oct 14 15:08:13.920: INFO: stdout: "e2e-test-crd-publish-openapi-8843-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Oct 14 15:08:13.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2912 delete e2e-test-crd-publish-openapi-8843-crds test-cr'
Oct 14 15:08:15.175: INFO: stderr: ""
Oct 14 15:08:15.175: INFO: stdout: "e2e-test-crd-publish-openapi-8843-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Oct 14 15:08:15.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2912 apply -f -'
Oct 14 15:08:18.345: INFO: stderr: ""
Oct 14 15:08:18.345: INFO: stdout: "e2e-test-crd-publish-openapi-8843-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Oct 14 15:08:18.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2912 delete e2e-test-crd-publish-openapi-8843-crds test-cr'
Oct 14 15:08:19.643: INFO: stderr: ""
Oct 14 15:08:19.643: INFO: stdout: "e2e-test-crd-publish-openapi-8843-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Oct 14 15:08:19.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8843-crds'
Oct 14 15:08:23.649: INFO: stderr: ""
Oct 14 15:08:23.649: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8843-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:08:43.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2912" for this suite. 
• [SLOW TEST:56.406 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":221,"skipped":3588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:08:43.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:08:43.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5938" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":222,"skipped":3617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:08:43.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:08:43.849: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:08:48.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "pods-4373" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3664,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:08:48.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6877.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6877.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6877.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6877.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6877.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6877.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 15:08:54.258: INFO: DNS probes using dns-6877/dns-test-68e2f434-f18f-46f8-8976-05aa9deba4d4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:08:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6877" for this suite. 
• [SLOW TEST:6.448 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":224,"skipped":3677,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:08:54.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:09:05.933: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:09:08.181: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284945, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284945, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284945, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738284945, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:09:11.293: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:09:11.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:12.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3243" for this suite. STEP: Destroying namespace "webhook-3243-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.127 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":225,"skipped":3698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:09:12.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:16.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7293" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:09:16.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's 
cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 15:09:16.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db" in namespace "downward-api-2781" to be "Succeeded or Failed" Oct 14 15:09:16.983: INFO: Pod "downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db": Phase="Pending", Reason="", readiness=false. Elapsed: 42.040619ms Oct 14 15:09:19.005: INFO: Pod "downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06370435s Oct 14 15:09:21.012: INFO: Pod "downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071263356s STEP: Saw pod success Oct 14 15:09:21.013: INFO: Pod "downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db" satisfied condition "Succeeded or Failed" Oct 14 15:09:21.018: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db container client-container: STEP: delete the pod Oct 14 15:09:21.040: INFO: Waiting for pod downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db to disappear Oct 14 15:09:21.044: INFO: Pod downwardapi-volume-52930355-7efb-411c-a8bf-f3b6198719db no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:21.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2781" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3755,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:09:21.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Oct 14 15:09:21.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f -' Oct 14 15:09:23.346: INFO: stderr: "" Oct 14 15:09:23.346: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 14 15:09:23.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config diff -f -' Oct 14 15:09:26.668: INFO: rc: 1 Oct 14 15:09:26.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete -f -' 
Oct 14 15:09:27.939: INFO: stderr: "" Oct 14 15:09:27.939: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:27.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-485" for this suite. • [SLOW TEST:6.906 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":228,"skipped":3768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:09:27.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be 
provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Oct 14 15:09:28.059: INFO: Waiting up to 5m0s for pod "var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8" in namespace "var-expansion-2988" to be "Succeeded or Failed" Oct 14 15:09:28.078: INFO: Pod "var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.3543ms Oct 14 15:09:30.086: INFO: Pod "var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026529168s Oct 14 15:09:32.094: INFO: Pod "var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034455049s STEP: Saw pod success Oct 14 15:09:32.094: INFO: Pod "var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8" satisfied condition "Succeeded or Failed" Oct 14 15:09:32.099: INFO: Trying to get logs from node latest-worker pod var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8 container dapi-container: STEP: delete the pod Oct 14 15:09:32.259: INFO: Waiting for pod var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8 to disappear Oct 14 15:09:32.284: INFO: Pod var-expansion-0b6678a7-ce7d-4108-8ef7-da5ad3f61ce8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:32.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2988" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3822,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:09:32.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 14 15:09:36.555: INFO: &Pod{ObjectMeta:{send-events-5a260e9d-ad22-48ce-b78c-9bfa5026981e events-5134 /api/v1/namespaces/events-5134/pods/send-events-5a260e9d-ad22-48ce-b78c-9bfa5026981e b99dc859-4c7d-4860-bab9-a1c26733baad 1154279 0 2020-10-14 15:09:32 +0000 UTC map[name:foo time:462594070] map[] [] [] [{e2e.test Update v1 2020-10-14 15:09:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 15:09:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jvhvj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jvhvj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Req
uests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jvhvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:09:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:09:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:09:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.157,StartTime:2020-10-14 15:09:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 15:09:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://bd8eb6fd01dbd2e68a0dadc01a08b8ba5ed7c268b56b8dcfaac7bd92ec2de6b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 14 15:09:38.568: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 14 15:09:40.579: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:09:40.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5134" for this suite. 
• [SLOW TEST:8.276 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":230,"skipped":3827,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:09:40.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create services for rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Oct 14 15:09:40.740: INFO: namespace kubectl-1217
Oct 14 15:09:40.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1217'
Oct 14 15:09:44.633: INFO: stderr: ""
Oct 14 15:09:44.634: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 14 15:09:45.645: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 15:09:45.645: INFO: Found 0 / 1
Oct 14 15:09:46.643: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 15:09:46.644: INFO: Found 0 / 1
Oct 14 15:09:47.856: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 15:09:47.857: INFO: Found 0 / 1
Oct 14 15:09:48.642: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 15:09:48.642: INFO: Found 1 / 1
Oct 14 15:09:48.642: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Oct 14 15:09:48.648: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 15:09:48.648: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 14 15:09:48.649: INFO: wait on agnhost-primary startup in kubectl-1217
Oct 14 15:09:48.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config logs agnhost-primary-r28ph agnhost-primary --namespace=kubectl-1217'
Oct 14 15:09:49.991: INFO: stderr: ""
Oct 14 15:09:49.992: INFO: stdout: "Paused\n"
STEP: exposing RC
Oct 14 15:09:49.993: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1217'
Oct 14 15:09:51.413: INFO: stderr: ""
Oct 14 15:09:51.413: INFO: stdout: "service/rm2 exposed\n"
Oct 14 15:09:51.418: INFO: Service rm2 in namespace kubectl-1217 found.
STEP: exposing service
Oct 14 15:09:53.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1217'
Oct 14 15:09:54.752: INFO: stderr: ""
Oct 14 15:09:54.752: INFO: stdout: "service/rm3 exposed\n"
Oct 14 15:09:54.764: INFO: Service rm3 in namespace kubectl-1217 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:09:56.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1217" for this suite.
• [SLOW TEST:16.143 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246
    should create services for rc [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":231,"skipped":3828,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:09:56.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 15:09:56.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987" in namespace "downward-api-3535" to be "Succeeded or Failed"
Oct 14 15:09:56.938: INFO: Pod "downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987": Phase="Pending", Reason="", readiness=false. Elapsed: 35.780536ms
Oct 14 15:09:58.946: INFO: Pod "downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044249108s
Oct 14 15:10:00.955: INFO: Pod "downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053183925s
STEP: Saw pod success
Oct 14 15:10:00.955: INFO: Pod "downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987" satisfied condition "Succeeded or Failed"
Oct 14 15:10:00.960: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987 container client-container:
STEP: delete the pod
Oct 14 15:10:01.005: INFO: Waiting for pod downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987 to disappear
Oct 14 15:10:01.013: INFO: Pod downwardapi-volume-008e2329-ca35-42e3-8fde-26ab582a1987 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:01.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3535" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3833,"failed":0}
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:01.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 14 15:10:09.250: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 14 15:10:09.269: INFO: Pod pod-with-poststart-http-hook still exists
Oct 14 15:10:11.270: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 14 15:10:11.279: INFO: Pod pod-with-poststart-http-hook still exists
Oct 14 15:10:13.270: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 14 15:10:13.278: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:13.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-652" for this suite.
• [SLOW TEST:12.193 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3833,"failed":0}
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:13.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check is all data is printed [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 15:10:13.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config version'
Oct 14 15:10:14.638: INFO: stderr: ""
Oct 14 15:10:14.638: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.3-rc.0\", GitCommit:\"d60a97015628047ffba1adebed86432370c354bc\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T14:01:27Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:14.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4931" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":234,"skipped":3833,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:14.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:14.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4556" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":235,"skipped":3842,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:14.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 14 15:10:14.922: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 14 15:10:15.016: INFO: Waiting for terminating namespaces to be deleted...
Oct 14 15:10:15.023: INFO: Logging pods the apiserver thinks is on node latest-worker before test
Oct 14 15:10:15.036: INFO: pod-handle-http-request from container-lifecycle-hook-652 started at 2020-10-14 15:10:01 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.037: INFO: Container pod-handle-http-request ready: true, restart count 0
Oct 14 15:10:15.037: INFO: send-events-5a260e9d-ad22-48ce-b78c-9bfa5026981e from events-5134 started at 2020-10-14 15:09:32 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.037: INFO: Container p ready: false, restart count 0
Oct 14 15:10:15.037: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.037: INFO: Container kindnet-cni ready: true, restart count 0
Oct 14 15:10:15.037: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.037: INFO: Container kube-proxy ready: true, restart count 0
Oct 14 15:10:15.037: INFO: agnhost-primary-r28ph from kubectl-1217 started at 2020-10-14 15:09:44 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.037: INFO: Container agnhost-primary ready: false, restart count 0
Oct 14 15:10:15.037: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
Oct 14 15:10:15.048: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.048: INFO: Container coredns ready: true, restart count 0
Oct 14 15:10:15.049: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.049: INFO: Container coredns ready: true, restart count 0
Oct 14 15:10:15.049: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.049: INFO: Container kindnet-cni ready: true, restart count 0
Oct 14 15:10:15.049: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.049: INFO: Container kube-proxy ready: true, restart count 0
Oct 14 15:10:15.049: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded)
Oct 14 15:10:15.049: INFO: Container local-path-provisioner ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-897dfb4e-16f3-4686-9889-70c2324d0063 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-897dfb4e-16f3-4686-9889-70c2324d0063 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-897dfb4e-16f3-4686-9889-70c2324d0063
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:23.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7737" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.806 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":236,"skipped":3842,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:23.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 14 15:10:23.741: INFO: Waiting up to 5m0s for pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c" in namespace "emptydir-771" to be "Succeeded or Failed"
Oct 14 15:10:23.759: INFO: Pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.782685ms
Oct 14 15:10:25.773: INFO: Pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031363938s
Oct 14 15:10:27.779: INFO: Pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c": Phase="Running", Reason="", readiness=true. Elapsed: 4.037842018s
Oct 14 15:10:29.796: INFO: Pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054501967s
STEP: Saw pod success
Oct 14 15:10:29.796: INFO: Pod "pod-31543703-56ed-4e49-9d99-f1f00510c90c" satisfied condition "Succeeded or Failed"
Oct 14 15:10:29.802: INFO: Trying to get logs from node latest-worker pod pod-31543703-56ed-4e49-9d99-f1f00510c90c container test-container:
STEP: delete the pod
Oct 14 15:10:29.831: INFO: Waiting for pod pod-31543703-56ed-4e49-9d99-f1f00510c90c to disappear
Oct 14 15:10:29.846: INFO: Pod pod-31543703-56ed-4e49-9d99-f1f00510c90c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:29.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-771" for this suite.
• [SLOW TEST:6.239 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3865,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:29.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 15:10:30.515: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9df4b3dc-d803-416e-8cbf-618df2ac77f5", Controller:(*bool)(0xb33309a), BlockOwnerDeletion:(*bool)(0xb33309b)}}
Oct 14 15:10:30.530: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3dad2f3-8507-450d-87b0-c3ed94aa9470", Controller:(*bool)(0xb2d3e72), BlockOwnerDeletion:(*bool)(0xb2d3e73)}}
Oct 14 15:10:30.581: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a63676ce-e77c-4fb2-b707-9a6a37e3d622", Controller:(*bool)(0xb34207a), BlockOwnerDeletion:(*bool)(0xb34207b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:35.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4351" for this suite.
• [SLOW TEST:5.825 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":238,"skipped":3873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:35.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-729dac40-a472-458e-976e-c045e49463d0
STEP: Creating a pod to test consume secrets
Oct 14 15:10:35.822: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2" in namespace "projected-5225" to be "Succeeded or Failed"
Oct 14 15:10:35.858: INFO: Pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.038256ms
Oct 14 15:10:37.888: INFO: Pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06595224s
Oct 14 15:10:39.896: INFO: Pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073786417s
Oct 14 15:10:41.904: INFO: Pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082178783s
STEP: Saw pod success
Oct 14 15:10:41.904: INFO: Pod "pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2" satisfied condition "Succeeded or Failed"
Oct 14 15:10:41.910: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2 container projected-secret-volume-test:
STEP: delete the pod
Oct 14 15:10:41.968: INFO: Waiting for pod pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2 to disappear
Oct 14 15:10:41.973: INFO: Pod pod-projected-secrets-80dae363-4f6b-4e9f-a75f-742ef1388cc2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:41.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5225" for this suite.
• [SLOW TEST:6.314 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":239,"skipped":3900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:42.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Oct 14 15:10:48.105: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8748 PodName:pod-sharedvolume-4f1ec90f-a0a6-4e99-87ff-c646b02d0846 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 15:10:48.106: INFO: >>> kubeConfig: /root/.kube/config
I1014 15:10:48.216096 11 log.go:181] (0x9816230) (0x98162a0) Create stream
I1014 15:10:48.216288 11 log.go:181] (0x9816230) (0x98162a0) Stream added, broadcasting: 1
I1014 15:10:48.222448 11 log.go:181] (0x9816230) Reply frame received for 1
I1014 15:10:48.222703 11 log.go:181] (0x9816230) (0x9816460) Create stream
I1014 15:10:48.222826 11 log.go:181] (0x9816230) (0x9816460) Stream added, broadcasting: 3
I1014 15:10:48.224940 11 log.go:181] (0x9816230) Reply frame received for 3
I1014 15:10:48.225171 11 log.go:181] (0x9816230) (0x9816620) Create stream
I1014 15:10:48.225306 11 log.go:181] (0x9816230) (0x9816620) Stream added, broadcasting: 5
I1014 15:10:48.227275 11 log.go:181] (0x9816230) Reply frame received for 5
I1014 15:10:48.321127 11 log.go:181] (0x9816230) Data frame received for 5
I1014 15:10:48.321394 11 log.go:181] (0x9816620) (5) Data frame handling
I1014 15:10:48.321617 11 log.go:181] (0x9816230) Data frame received for 3
I1014 15:10:48.321766 11 log.go:181] (0x9816460) (3) Data frame handling
I1014 15:10:48.321929 11 log.go:181] (0x9816460) (3) Data frame sent
I1014 15:10:48.322071 11 log.go:181] (0x9816230) Data frame received for 3
I1014 15:10:48.322208 11 log.go:181] (0x9816460) (3) Data frame handling
I1014 15:10:48.322539 11 log.go:181] (0x9816230) Data frame received for 1
I1014 15:10:48.322642 11 log.go:181] (0x98162a0) (1) Data frame handling
I1014 15:10:48.322746 11 log.go:181] (0x98162a0) (1) Data frame sent
I1014 15:10:48.322877 11 log.go:181] (0x9816230) (0x98162a0) Stream removed, broadcasting: 1
I1014 15:10:48.323039 11 log.go:181] (0x9816230) Go away received
I1014 15:10:48.323521 11 log.go:181] (0x9816230) (0x98162a0) Stream removed, broadcasting: 1
I1014 15:10:48.323693 11 log.go:181] (0x9816230) (0x9816460) Stream removed, broadcasting: 3
I1014 15:10:48.323796 11 log.go:181] (0x9816230) (0x9816620) Stream removed, broadcasting: 5
Oct 14 15:10:48.323: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:10:48.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8748" for this suite.
• [SLOW TEST:6.331 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":240,"skipped":3933,"failed":0}
SS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:10:48.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-f676ef09-4594-4af2-a022-1446a6b3dff9 in namespace container-probe-6537
Oct 14 15:10:52.527: INFO: Started pod liveness-f676ef09-4594-4af2-a022-1446a6b3dff9 in namespace container-probe-6537
STEP: checking the pod's current state and verifying that restartCount is present Oct 14 15:10:52.531: INFO: Initial restart count of pod liveness-f676ef09-4594-4af2-a022-1446a6b3dff9 is 0 Oct 14 15:11:18.651: INFO: Restart count of pod container-probe-6537/liveness-f676ef09-4594-4af2-a022-1446a6b3dff9 is now 1 (26.119859321s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:11:18.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6537" for this suite. • [SLOW TEST:30.390 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3935,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:11:18.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 15:11:23.519: INFO: Successfully updated pod "annotationupdate1660ffc4-95e5-414c-9a04-fdcf0e3ecf54" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:11:25.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-488" for this suite. • [SLOW TEST:6.889 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":3953,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:11:25.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:11:29.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1802" for this suite. 
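The kubelet-test entries above only record pod creation and namespace teardown; the pod under test is, in outline, a busybox container whose command output the suite reads back through the logs endpoint. A minimal sketch of that kind of pod (the names and echoed string are illustrative, not the ones the suite generates):

```yaml
# Sketch of the kind of pod "should print the output to logs" runs.
# Pod/container names and the message are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello from busybox'"]
```

The check then amounts to fetching the container's logs (roughly `kubectl logs busybox-logs-demo`) and asserting the expected string is present.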
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3960,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:11:29.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 14 15:11:38.025: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 15:11:38.056: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 15:11:40.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 15:11:40.065: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 15:11:42.057: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 15:11:42.065: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 15:11:44.057: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 15:11:44.066: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 15:11:46.057: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 15:11:46.065: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:11:46.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-570" for this suite.
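The pod-with-poststart-exec-hook exercised above is, in outline, a container carrying a `postStart` exec handler; the suite verifies the handler ran, deletes the pod, and then polls for it to disappear, as the repeated "Waiting for pod … to disappear" entries show. A minimal sketch (the image, command, and handler body are assumptions; the log does not show the actual spec):

```yaml
# Illustrative postStart exec hook pod; handler details are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container right after it starts.
          command: ["/bin/sh", "-c", "echo poststart > /tmp/poststart"]
```

Kubernetes does not mark the container Running until the postStart handler completes, which is what makes "check poststart hook" observable before the pod is deleted.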
• [SLOW TEST:16.286 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":3978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:11:46.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 14 15:11:46.265: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 14 15:12:46.356: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Oct 14 15:12:46.411: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 14 15:12:46.487: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:13:18.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1953" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:93.005 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":245,"skipped":4033,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:13:19.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:13:19.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3121" for this suite.
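The Endpoint lifecycle steps above (create, list, update, patch, delete by collection) operate on a plain `v1` Endpoints object. A minimal manifest of the kind being exercised might look like this (name, address, and port are illustrative; the suite uses its own values):

```yaml
# Illustrative Endpoints object; all values are assumptions.
apiVersion: v1
kind: Endpoints
metadata:
  name: example-endpoint
subsets:
- addresses:
  - ip: 10.0.0.1
  ports:
  - port: 80
```

The update and patch steps then modify fields of this object in place, and "deleting the Endpoint by Collection" removes it via a list-scoped delete (e.g. by label selector) rather than by name.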
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":246,"skipped":4039,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:13:19.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-zjmb
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 15:13:19.699: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zjmb" in namespace "subpath-7126" to be "Succeeded or Failed"
Oct 14 15:13:19.720: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.371911ms
Oct 14 15:13:21.732: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032754354s
Oct 14 15:13:23.739: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 4.039885958s
Oct 14 15:13:25.747: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 6.047489702s
Oct 14 15:13:27.753: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.053681281s
Oct 14 15:13:29.761: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.061076458s
Oct 14 15:13:31.768: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.068466041s
Oct 14 15:13:33.776: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.076524246s
Oct 14 15:13:35.785: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.085221162s
Oct 14 15:13:37.793: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.09314777s
Oct 14 15:13:39.799: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.100045632s
Oct 14 15:13:41.806: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Running", Reason="", readiness=true. Elapsed: 22.106910553s
Oct 14 15:13:43.812: INFO: Pod "pod-subpath-test-secret-zjmb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.113025391s
STEP: Saw pod success
Oct 14 15:13:43.813: INFO: Pod "pod-subpath-test-secret-zjmb" satisfied condition "Succeeded or Failed"
Oct 14 15:13:43.818: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-zjmb container test-container-subpath-secret-zjmb:
STEP: delete the pod
Oct 14 15:13:44.094: INFO: Waiting for pod pod-subpath-test-secret-zjmb to disappear
Oct 14 15:13:44.098: INFO: Pod pod-subpath-test-secret-zjmb no longer exists
STEP: Deleting pod pod-subpath-test-secret-zjmb
Oct 14 15:13:44.098: INFO: Deleting pod "pod-subpath-test-secret-zjmb" in namespace "subpath-7126"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:13:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7126" for this suite.
• [SLOW TEST:24.634 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":247,"skipped":4057,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:13:44.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3742 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3742 I1014 15:13:44.586653 11 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3742, replica count: 2 I1014 15:13:47.638213 11 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 15:13:50.639148 11 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 15:13:50.639: INFO: Creating new exec pod Oct 14 15:13:55.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3742 execpoddd5qv -- /bin/sh -x -c nc -zv -t -w 2 
externalname-service 80' Oct 14 15:13:57.183: INFO: stderr: "I1014 15:13:57.055229 3642 log.go:181] (0x2c3e000) (0x2c3e070) Create stream\nI1014 15:13:57.058200 3642 log.go:181] (0x2c3e000) (0x2c3e070) Stream added, broadcasting: 1\nI1014 15:13:57.068332 3642 log.go:181] (0x2c3e000) Reply frame received for 1\nI1014 15:13:57.069696 3642 log.go:181] (0x2c3e000) (0x2dae070) Create stream\nI1014 15:13:57.069894 3642 log.go:181] (0x2c3e000) (0x2dae070) Stream added, broadcasting: 3\nI1014 15:13:57.071733 3642 log.go:181] (0x2c3e000) Reply frame received for 3\nI1014 15:13:57.071973 3642 log.go:181] (0x2c3e000) (0x3098070) Create stream\nI1014 15:13:57.072058 3642 log.go:181] (0x2c3e000) (0x3098070) Stream added, broadcasting: 5\nI1014 15:13:57.073565 3642 log.go:181] (0x2c3e000) Reply frame received for 5\nI1014 15:13:57.163526 3642 log.go:181] (0x2c3e000) Data frame received for 5\nI1014 15:13:57.163931 3642 log.go:181] (0x2c3e000) Data frame received for 3\nI1014 15:13:57.164156 3642 log.go:181] (0x2dae070) (3) Data frame handling\nI1014 15:13:57.164388 3642 log.go:181] (0x3098070) (5) Data frame handling\nI1014 15:13:57.165558 3642 log.go:181] (0x2c3e000) Data frame received for 1\nI1014 15:13:57.165690 3642 log.go:181] (0x2c3e070) (1) Data frame handling\nI1014 15:13:57.166143 3642 log.go:181] (0x2c3e070) (1) Data frame sent\nI1014 15:13:57.166592 3642 log.go:181] (0x3098070) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1014 15:13:57.167756 3642 log.go:181] (0x2c3e000) Data frame received for 5\nI1014 15:13:57.167927 3642 log.go:181] (0x3098070) (5) Data frame handling\nI1014 15:13:57.168127 3642 log.go:181] (0x3098070) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1014 15:13:57.168274 3642 log.go:181] (0x2c3e000) Data frame received for 5\nI1014 15:13:57.168424 3642 log.go:181] (0x3098070) (5) Data frame handling\nI1014 15:13:57.170668 3642 log.go:181] (0x2c3e000) (0x2c3e070) Stream removed, 
broadcasting: 1\nI1014 15:13:57.171167 3642 log.go:181] (0x2c3e000) Go away received\nI1014 15:13:57.174305 3642 log.go:181] (0x2c3e000) (0x2c3e070) Stream removed, broadcasting: 1\nI1014 15:13:57.174544 3642 log.go:181] (0x2c3e000) (0x2dae070) Stream removed, broadcasting: 3\nI1014 15:13:57.174755 3642 log.go:181] (0x2c3e000) (0x3098070) Stream removed, broadcasting: 5\n" Oct 14 15:13:57.184: INFO: stdout: "" Oct 14 15:13:57.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-3742 execpoddd5qv -- /bin/sh -x -c nc -zv -t -w 2 10.110.34.255 80' Oct 14 15:13:58.773: INFO: stderr: "I1014 15:13:58.646654 3662 log.go:181] (0x2a60000) (0x2a60070) Create stream\nI1014 15:13:58.649604 3662 log.go:181] (0x2a60000) (0x2a60070) Stream added, broadcasting: 1\nI1014 15:13:58.658300 3662 log.go:181] (0x2a60000) Reply frame received for 1\nI1014 15:13:58.658732 3662 log.go:181] (0x2a60000) (0x2512770) Create stream\nI1014 15:13:58.658809 3662 log.go:181] (0x2a60000) (0x2512770) Stream added, broadcasting: 3\nI1014 15:13:58.660155 3662 log.go:181] (0x2a60000) Reply frame received for 3\nI1014 15:13:58.660380 3662 log.go:181] (0x2a60000) (0x2594070) Create stream\nI1014 15:13:58.660436 3662 log.go:181] (0x2a60000) (0x2594070) Stream added, broadcasting: 5\nI1014 15:13:58.661833 3662 log.go:181] (0x2a60000) Reply frame received for 5\nI1014 15:13:58.755725 3662 log.go:181] (0x2a60000) Data frame received for 3\nI1014 15:13:58.756070 3662 log.go:181] (0x2a60000) Data frame received for 5\nI1014 15:13:58.756244 3662 log.go:181] (0x2594070) (5) Data frame handling\nI1014 15:13:58.756342 3662 log.go:181] (0x2512770) (3) Data frame handling\nI1014 15:13:58.756691 3662 log.go:181] (0x2a60000) Data frame received for 1\nI1014 15:13:58.756992 3662 log.go:181] (0x2a60070) (1) Data frame handling\nI1014 15:13:58.757424 3662 log.go:181] (0x2a60070) (1) Data frame sent\nI1014 15:13:58.757556 3662 log.go:181] 
(0x2594070) (5) Data frame sent\nI1014 15:13:58.758391 3662 log.go:181] (0x2a60000) Data frame received for 5\nI1014 15:13:58.758489 3662 log.go:181] (0x2594070) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.34.255 80\nConnection to 10.110.34.255 80 port [tcp/http] succeeded!\nI1014 15:13:58.759551 3662 log.go:181] (0x2a60000) (0x2a60070) Stream removed, broadcasting: 1\nI1014 15:13:58.761632 3662 log.go:181] (0x2a60000) Go away received\nI1014 15:13:58.764259 3662 log.go:181] (0x2a60000) (0x2a60070) Stream removed, broadcasting: 1\nI1014 15:13:58.764481 3662 log.go:181] (0x2a60000) (0x2512770) Stream removed, broadcasting: 3\nI1014 15:13:58.764715 3662 log.go:181] (0x2a60000) (0x2594070) Stream removed, broadcasting: 5\n" Oct 14 15:13:58.775: INFO: stdout: "" Oct 14 15:13:58.775: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:13:58.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3742" for this suite. 
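The type change exercised above starts from an ExternalName service and updates it to ClusterIP, backed by the pods of the externalname-service replication controller; the connectivity check is the `nc -zv` probe visible in the log. A sketch of the two states (the external name and selector label are assumptions; the log does not show the actual spec):

```yaml
# Initial state: ExternalName service (DNS CNAME only, nothing proxied).
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com   # illustrative target
---
# After the update: ClusterIP service selecting the RC's pods.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service   # assumed label; the suite wires its own selector
  ports:
  - port: 80
```

Once the type is ClusterIP, the service gets a cluster IP (10.110.34.255 in this run), which is why the second `nc` probe targets that address directly.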
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:14.753 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":248,"skipped":4084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:13:58.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:13:59.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-872" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":249,"skipped":4117,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:13:59.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 15:13:59.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3067" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":250,"skipped":4120,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 15:13:59.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in
namespace services-7122 Oct 14 15:14:03.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 14 15:14:04.975: INFO: stderr: "I1014 15:14:04.846428 3682 log.go:181] (0x2ac6000) (0x2ac6070) Create stream\nI1014 15:14:04.848162 3682 log.go:181] (0x2ac6000) (0x2ac6070) Stream added, broadcasting: 1\nI1014 15:14:04.867118 3682 log.go:181] (0x2ac6000) Reply frame received for 1\nI1014 15:14:04.867668 3682 log.go:181] (0x2ac6000) (0x2ac61c0) Create stream\nI1014 15:14:04.867735 3682 log.go:181] (0x2ac6000) (0x2ac61c0) Stream added, broadcasting: 3\nI1014 15:14:04.869008 3682 log.go:181] (0x2ac6000) Reply frame received for 3\nI1014 15:14:04.869262 3682 log.go:181] (0x2ac6000) (0x2e18070) Create stream\nI1014 15:14:04.869327 3682 log.go:181] (0x2ac6000) (0x2e18070) Stream added, broadcasting: 5\nI1014 15:14:04.870317 3682 log.go:181] (0x2ac6000) Reply frame received for 5\nI1014 15:14:04.931323 3682 log.go:181] (0x2ac6000) Data frame received for 5\nI1014 15:14:04.931614 3682 log.go:181] (0x2e18070) (5) Data frame handling\nI1014 15:14:04.932232 3682 log.go:181] (0x2e18070) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1014 15:14:04.954500 3682 log.go:181] (0x2ac6000) Data frame received for 3\nI1014 15:14:04.954622 3682 log.go:181] (0x2ac61c0) (3) Data frame handling\nI1014 15:14:04.954802 3682 log.go:181] (0x2ac61c0) (3) Data frame sent\nI1014 15:14:04.955921 3682 log.go:181] (0x2ac6000) Data frame received for 3\nI1014 15:14:04.956086 3682 log.go:181] (0x2ac61c0) (3) Data frame handling\nI1014 15:14:04.956484 3682 log.go:181] (0x2ac6000) Data frame received for 5\nI1014 15:14:04.956691 3682 log.go:181] (0x2e18070) (5) Data frame handling\nI1014 15:14:04.957963 3682 log.go:181] (0x2ac6000) Data frame received for 1\nI1014 
15:14:04.958055 3682 log.go:181] (0x2ac6070) (1) Data frame handling\nI1014 15:14:04.958161 3682 log.go:181] (0x2ac6070) (1) Data frame sent\nI1014 15:14:04.958486 3682 log.go:181] (0x2ac6000) (0x2ac6070) Stream removed, broadcasting: 1\nI1014 15:14:04.962483 3682 log.go:181] (0x2ac6000) Go away received\nI1014 15:14:04.964375 3682 log.go:181] (0x2ac6000) (0x2ac6070) Stream removed, broadcasting: 1\nI1014 15:14:04.965128 3682 log.go:181] (0x2ac6000) (0x2ac61c0) Stream removed, broadcasting: 3\nI1014 15:14:04.965667 3682 log.go:181] (0x2ac6000) (0x2e18070) Stream removed, broadcasting: 5\n" Oct 14 15:14:04.976: INFO: stdout: "iptables" Oct 14 15:14:04.976: INFO: proxyMode: iptables Oct 14 15:14:05.003: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:05.218: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:07.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:07.356: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:09.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:09.226: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:11.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:11.225: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:13.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:13.239: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:15.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:15.232: INFO: Pod kube-proxy-mode-detector still exists Oct 14 15:14:17.219: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 15:14:17.226: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-7122 STEP: creating replication controller affinity-nodeport-timeout in namespace services-7122 I1014 15:14:17.334382 11 runners.go:190] Created replication controller with name: 
affinity-nodeport-timeout, namespace: services-7122, replica count: 3 I1014 15:14:20.386240 11 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 15:14:23.386860 11 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 15:14:23.408: INFO: Creating new exec pod Oct 14 15:14:28.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Oct 14 15:14:29.932: INFO: stderr: "I1014 15:14:29.815503 3703 log.go:181] (0x2bef420) (0x2bef490) Create stream\nI1014 15:14:29.819128 3703 log.go:181] (0x2bef420) (0x2bef490) Stream added, broadcasting: 1\nI1014 15:14:29.832647 3703 log.go:181] (0x2bef420) Reply frame received for 1\nI1014 15:14:29.833135 3703 log.go:181] (0x2bef420) (0x247d110) Create stream\nI1014 15:14:29.833195 3703 log.go:181] (0x2bef420) (0x247d110) Stream added, broadcasting: 3\nI1014 15:14:29.834524 3703 log.go:181] (0x2bef420) Reply frame received for 3\nI1014 15:14:29.834868 3703 log.go:181] (0x2bef420) (0x2666070) Create stream\nI1014 15:14:29.834960 3703 log.go:181] (0x2bef420) (0x2666070) Stream added, broadcasting: 5\nI1014 15:14:29.836206 3703 log.go:181] (0x2bef420) Reply frame received for 5\nI1014 15:14:29.915118 3703 log.go:181] (0x2bef420) Data frame received for 5\nI1014 15:14:29.915541 3703 log.go:181] (0x2666070) (5) Data frame handling\nI1014 15:14:29.915755 3703 log.go:181] (0x2bef420) Data frame received for 1\nI1014 15:14:29.915908 3703 log.go:181] (0x2bef490) (1) Data frame handling\nI1014 15:14:29.916071 3703 log.go:181] (0x2bef420) Data frame received for 3\nI1014 15:14:29.916188 3703 log.go:181] (0x247d110) (3) Data frame handling\nI1014 
15:14:29.916502 3703 log.go:181] (0x2bef490) (1) Data frame sent\nI1014 15:14:29.916819 3703 log.go:181] (0x2666070) (5) Data frame sent\nI1014 15:14:29.917316 3703 log.go:181] (0x2bef420) Data frame received for 5\nI1014 15:14:29.917396 3703 log.go:181] (0x2666070) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1014 15:14:29.918155 3703 log.go:181] (0x2bef420) (0x2bef490) Stream removed, broadcasting: 1\nI1014 15:14:29.920934 3703 log.go:181] (0x2bef420) Go away received\nI1014 15:14:29.923058 3703 log.go:181] (0x2bef420) (0x2bef490) Stream removed, broadcasting: 1\nI1014 15:14:29.923353 3703 log.go:181] (0x2bef420) (0x247d110) Stream removed, broadcasting: 3\nI1014 15:14:29.923504 3703 log.go:181] (0x2bef420) (0x2666070) Stream removed, broadcasting: 5\n" Oct 14 15:14:29.933: INFO: stdout: "" Oct 14 15:14:29.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c nc -zv -t -w 2 10.97.173.46 80' Oct 14 15:14:31.426: INFO: stderr: "I1014 15:14:31.293504 3723 log.go:181] (0x311e000) (0x311e070) Create stream\nI1014 15:14:31.296618 3723 log.go:181] (0x311e000) (0x311e070) Stream added, broadcasting: 1\nI1014 15:14:31.306599 3723 log.go:181] (0x311e000) Reply frame received for 1\nI1014 15:14:31.307628 3723 log.go:181] (0x311e000) (0x27e0070) Create stream\nI1014 15:14:31.307738 3723 log.go:181] (0x311e000) (0x27e0070) Stream added, broadcasting: 3\nI1014 15:14:31.310147 3723 log.go:181] (0x311e000) Reply frame received for 3\nI1014 15:14:31.310578 3723 log.go:181] (0x311e000) (0x26563f0) Create stream\nI1014 15:14:31.310680 3723 log.go:181] (0x311e000) (0x26563f0) Stream added, broadcasting: 5\nI1014 15:14:31.312292 3723 log.go:181] (0x311e000) Reply frame received for 5\nI1014 15:14:31.406929 3723 log.go:181] (0x311e000) Data frame received for 
5\nI1014 15:14:31.407227 3723 log.go:181] (0x311e000) Data frame received for 1\nI1014 15:14:31.407531 3723 log.go:181] (0x26563f0) (5) Data frame handling\nI1014 15:14:31.407699 3723 log.go:181] (0x311e070) (1) Data frame handling\nI1014 15:14:31.407944 3723 log.go:181] (0x311e000) Data frame received for 3\nI1014 15:14:31.408096 3723 log.go:181] (0x27e0070) (3) Data frame handling\nI1014 15:14:31.409485 3723 log.go:181] (0x311e070) (1) Data frame sent\n+ nc -zv -t -w 2 10.97.173.46 80\nConnection to 10.97.173.46 80 port [tcp/http] succeeded!\nI1014 15:14:31.411324 3723 log.go:181] (0x311e000) (0x311e070) Stream removed, broadcasting: 1\nI1014 15:14:31.411922 3723 log.go:181] (0x26563f0) (5) Data frame sent\nI1014 15:14:31.412139 3723 log.go:181] (0x311e000) Data frame received for 5\nI1014 15:14:31.412304 3723 log.go:181] (0x26563f0) (5) Data frame handling\nI1014 15:14:31.414284 3723 log.go:181] (0x311e000) Go away received\nI1014 15:14:31.417539 3723 log.go:181] (0x311e000) (0x311e070) Stream removed, broadcasting: 1\nI1014 15:14:31.417981 3723 log.go:181] (0x311e000) (0x27e0070) Stream removed, broadcasting: 3\nI1014 15:14:31.418194 3723 log.go:181] (0x311e000) (0x26563f0) Stream removed, broadcasting: 5\n" Oct 14 15:14:31.428: INFO: stdout: "" Oct 14 15:14:31.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30901' Oct 14 15:14:32.949: INFO: stderr: "I1014 15:14:32.819231 3743 log.go:181] (0x3116000) (0x3116070) Create stream\nI1014 15:14:32.821314 3743 log.go:181] (0x3116000) (0x3116070) Stream added, broadcasting: 1\nI1014 15:14:32.833101 3743 log.go:181] (0x3116000) Reply frame received for 1\nI1014 15:14:32.834163 3743 log.go:181] (0x3116000) (0x2e1a070) Create stream\nI1014 15:14:32.834297 3743 log.go:181] (0x3116000) (0x2e1a070) Stream added, broadcasting: 3\nI1014 15:14:32.836248 3743 log.go:181] 
(0x3116000) Reply frame received for 3\nI1014 15:14:32.836584 3743 log.go:181] (0x3116000) (0x28c4070) Create stream\nI1014 15:14:32.836670 3743 log.go:181] (0x3116000) (0x28c4070) Stream added, broadcasting: 5\nI1014 15:14:32.838396 3743 log.go:181] (0x3116000) Reply frame received for 5\nI1014 15:14:32.930148 3743 log.go:181] (0x3116000) Data frame received for 3\nI1014 15:14:32.930433 3743 log.go:181] (0x2e1a070) (3) Data frame handling\nI1014 15:14:32.930694 3743 log.go:181] (0x3116000) Data frame received for 5\nI1014 15:14:32.930905 3743 log.go:181] (0x28c4070) (5) Data frame handling\nI1014 15:14:32.931173 3743 log.go:181] (0x3116000) Data frame received for 1\nI1014 15:14:32.931316 3743 log.go:181] (0x3116070) (1) Data frame handling\nI1014 15:14:32.932104 3743 log.go:181] (0x28c4070) (5) Data frame sent\nI1014 15:14:32.932265 3743 log.go:181] (0x3116070) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30901\nConnection to 172.18.0.15 30901 port [tcp/30901] succeeded!\nI1014 15:14:32.932683 3743 log.go:181] (0x3116000) Data frame received for 5\nI1014 15:14:32.932824 3743 log.go:181] (0x28c4070) (5) Data frame handling\nI1014 15:14:32.933455 3743 log.go:181] (0x3116000) (0x3116070) Stream removed, broadcasting: 1\nI1014 15:14:32.936770 3743 log.go:181] (0x3116000) Go away received\nI1014 15:14:32.939197 3743 log.go:181] (0x3116000) (0x3116070) Stream removed, broadcasting: 1\nI1014 15:14:32.939747 3743 log.go:181] (0x3116000) (0x2e1a070) Stream removed, broadcasting: 3\nI1014 15:14:32.939999 3743 log.go:181] (0x3116000) (0x28c4070) Stream removed, broadcasting: 5\n" Oct 14 15:14:32.950: INFO: stdout: "" Oct 14 15:14:32.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30901' Oct 14 15:14:34.496: INFO: stderr: "I1014 15:14:34.381338 3763 log.go:181] (0x2a34000) (0x2a34070) Create stream\nI1014 
15:14:34.384450 3763 log.go:181] (0x2a34000) (0x2a34070) Stream added, broadcasting: 1\nI1014 15:14:34.395271 3763 log.go:181] (0x2a34000) Reply frame received for 1\nI1014 15:14:34.395730 3763 log.go:181] (0x2a34000) (0x2dac070) Create stream\nI1014 15:14:34.395790 3763 log.go:181] (0x2a34000) (0x2dac070) Stream added, broadcasting: 3\nI1014 15:14:34.397202 3763 log.go:181] (0x2a34000) Reply frame received for 3\nI1014 15:14:34.397414 3763 log.go:181] (0x2a34000) (0x2a34310) Create stream\nI1014 15:14:34.397471 3763 log.go:181] (0x2a34000) (0x2a34310) Stream added, broadcasting: 5\nI1014 15:14:34.398571 3763 log.go:181] (0x2a34000) Reply frame received for 5\nI1014 15:14:34.476936 3763 log.go:181] (0x2a34000) Data frame received for 3\nI1014 15:14:34.477432 3763 log.go:181] (0x2a34000) Data frame received for 5\nI1014 15:14:34.477665 3763 log.go:181] (0x2a34310) (5) Data frame handling\nI1014 15:14:34.478034 3763 log.go:181] (0x2dac070) (3) Data frame handling\nI1014 15:14:34.478526 3763 log.go:181] (0x2a34000) Data frame received for 1\nI1014 15:14:34.478720 3763 log.go:181] (0x2a34070) (1) Data frame handling\nI1014 15:14:34.479503 3763 log.go:181] (0x2a34310) (5) Data frame sent\nI1014 15:14:34.479823 3763 log.go:181] (0x2a34000) Data frame received for 5\nI1014 15:14:34.479956 3763 log.go:181] (0x2a34310) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30901\nConnection to 172.18.0.14 30901 port [tcp/30901] succeeded!\nI1014 15:14:34.481276 3763 log.go:181] (0x2a34070) (1) Data frame sent\nI1014 15:14:34.482570 3763 log.go:181] (0x2a34000) (0x2a34070) Stream removed, broadcasting: 1\nI1014 15:14:34.484582 3763 log.go:181] (0x2a34000) Go away received\nI1014 15:14:34.487853 3763 log.go:181] (0x2a34000) (0x2a34070) Stream removed, broadcasting: 1\nI1014 15:14:34.488087 3763 log.go:181] (0x2a34000) (0x2dac070) Stream removed, broadcasting: 3\nI1014 15:14:34.488324 3763 log.go:181] (0x2a34000) (0x2a34310) Stream removed, broadcasting: 5\n" Oct 14 
15:14:34.497: INFO: stdout: "" Oct 14 15:14:34.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.15:30901/ ; done' Oct 14 15:14:36.091: INFO: stderr: "I1014 15:14:35.877362 3783 log.go:181] (0x2f08070) (0x2f080e0) Create stream\nI1014 15:14:35.879116 3783 log.go:181] (0x2f08070) (0x2f080e0) Stream added, broadcasting: 1\nI1014 15:14:35.889624 3783 log.go:181] (0x2f08070) Reply frame received for 1\nI1014 15:14:35.890230 3783 log.go:181] (0x2f08070) (0x2f08230) Create stream\nI1014 15:14:35.890314 3783 log.go:181] (0x2f08070) (0x2f08230) Stream added, broadcasting: 3\nI1014 15:14:35.891976 3783 log.go:181] (0x2f08070) Reply frame received for 3\nI1014 15:14:35.892421 3783 log.go:181] (0x2f08070) (0x2f083f0) Create stream\nI1014 15:14:35.892551 3783 log.go:181] (0x2f08070) (0x2f083f0) Stream added, broadcasting: 5\nI1014 15:14:35.894495 3783 log.go:181] (0x2f08070) Reply frame received for 5\nI1014 15:14:35.987018 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:35.987263 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:35.987454 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:35.987597 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:35.987671 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:35.987979 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:35.991780 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:35.992012 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:35.992162 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:35.992262 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:35.992347 3783 log.go:181] 
(0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:35.992428 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:35.992510 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:35.992608 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:35.992710 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:35.998291 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:35.998433 3783 log.go:181] (0x2f083f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:35.998601 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:35.999039 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:35.999243 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:35.999431 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:35.999591 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:35.999720 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:35.999859 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.001335 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.001522 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.001755 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.001952 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.002078 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.002225 3783 log.go:181] (0x2f083f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.002348 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.002478 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.002585 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:36.005246 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.005424 3783 
log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.005572 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.006091 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.006286 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.006398 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.006531 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.006642 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.006753 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.010357 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.010465 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.010589 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.011211 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.011312 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.011397 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.011472 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.011543 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.011639 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.016791 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.016956 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.017053 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.017715 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.017856 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.017948 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.018044 3783 log.go:181] (0x2f083f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 
15:14:36.018149 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.018325 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:36.023189 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.023318 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.023449 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.023770 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.023871 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.023965 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.024054 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.024236 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.024339 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.027614 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.027738 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.027876 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.028163 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.028284 3783 log.go:181] (0x2f083f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.028428 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.028615 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.028744 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:36.028931 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.034052 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.034146 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.034244 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.035016 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.035110 3783 log.go:181] (0x2f083f0) (5) Data frame 
handling\nI1014 15:14:36.035194 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.035270 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.035341 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.035431 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.039436 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.039581 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.039734 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.039929 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.040005 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.040071 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/I1014 15:14:36.040137 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.040417 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.040500 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n\nI1014 15:14:36.040561 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.040742 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.041025 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.046305 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.046384 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.046474 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.046898 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.046991 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.047066 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.047136 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.047208 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.047282 3783 log.go:181] 
(0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.051957 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.052040 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.052137 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.052505 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.052577 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.052649 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.052958 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.053105 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.053247 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.056595 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.056680 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.056769 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.057236 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.057310 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.057385 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1014 15:14:36.057463 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.057529 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.057609 3783 log.go:181] (0x2f083f0) (5) Data frame sent\n http://172.18.0.15:30901/\nI1014 15:14:36.057681 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.057748 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.057830 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.061662 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.061760 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.061832 3783 
log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.062249 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.062413 3783 log.go:181] (0x2f083f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.062562 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.062723 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.062870 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:36.062988 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.066924 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.067075 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.067287 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.067645 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.067789 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.067889 3783 log.go:181] (0x2f08070) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:36.068009 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.068084 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.068151 3783 log.go:181] (0x2f083f0) (5) Data frame sent\nI1014 15:14:36.073848 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.074007 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.074147 3783 log.go:181] (0x2f08230) (3) Data frame sent\nI1014 15:14:36.074595 3783 log.go:181] (0x2f08070) Data frame received for 5\nI1014 15:14:36.074720 3783 log.go:181] (0x2f083f0) (5) Data frame handling\nI1014 15:14:36.075457 3783 log.go:181] (0x2f08070) Data frame received for 3\nI1014 15:14:36.075591 3783 log.go:181] (0x2f08230) (3) Data frame handling\nI1014 15:14:36.077970 3783 log.go:181] (0x2f08070) Data frame received for 1\nI1014 15:14:36.078049 3783 log.go:181] (0x2f080e0) (1) Data frame 
handling\nI1014 15:14:36.078180 3783 log.go:181] (0x2f080e0) (1) Data frame sent\nI1014 15:14:36.079387 3783 log.go:181] (0x2f08070) (0x2f080e0) Stream removed, broadcasting: 1\nI1014 15:14:36.081968 3783 log.go:181] (0x2f08070) Go away received\nI1014 15:14:36.083274 3783 log.go:181] (0x2f08070) (0x2f080e0) Stream removed, broadcasting: 1\nI1014 15:14:36.083596 3783 log.go:181] (0x2f08070) (0x2f08230) Stream removed, broadcasting: 3\nI1014 15:14:36.083786 3783 log.go:181] (0x2f08070) (0x2f083f0) Stream removed, broadcasting: 5\n" Oct 14 15:14:36.095: INFO: stdout: "\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf\naffinity-nodeport-timeout-m5cpf" Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 
15:14:36.095: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Received response from host: affinity-nodeport-timeout-m5cpf Oct 14 15:14:36.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:30901/' Oct 14 15:14:37.576: INFO: stderr: "I1014 15:14:37.436689 3803 log.go:181] (0x28620e0) (0x28621c0) Create stream\nI1014 15:14:37.439256 3803 log.go:181] (0x28620e0) (0x28621c0) Stream added, broadcasting: 1\nI1014 15:14:37.447887 3803 log.go:181] (0x28620e0) Reply frame received for 1\nI1014 15:14:37.448313 3803 log.go:181] (0x28620e0) (0x2862540) Create stream\nI1014 15:14:37.448367 3803 log.go:181] (0x28620e0) (0x2862540) Stream added, broadcasting: 3\nI1014 15:14:37.449706 3803 log.go:181] (0x28620e0) Reply frame received for 3\nI1014 15:14:37.450079 3803 log.go:181] (0x28620e0) (0x293e070) Create stream\nI1014 15:14:37.450168 3803 log.go:181] (0x28620e0) (0x293e070) Stream added, broadcasting: 5\nI1014 15:14:37.451571 3803 log.go:181] (0x28620e0) Reply frame received for 5\nI1014 15:14:37.553549 3803 log.go:181] (0x28620e0) Data frame received for 5\nI1014 15:14:37.553752 3803 log.go:181] (0x293e070) (5) Data frame handling\nI1014 15:14:37.554062 3803 log.go:181] (0x293e070) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:37.558304 3803 log.go:181] (0x28620e0) Data frame received for 3\nI1014 15:14:37.558394 3803 log.go:181] (0x2862540) (3) Data frame 
handling\nI1014 15:14:37.558531 3803 log.go:181] (0x28620e0) Data frame received for 5\nI1014 15:14:37.558695 3803 log.go:181] (0x293e070) (5) Data frame handling\nI1014 15:14:37.558790 3803 log.go:181] (0x2862540) (3) Data frame sent\nI1014 15:14:37.558909 3803 log.go:181] (0x28620e0) Data frame received for 3\nI1014 15:14:37.559023 3803 log.go:181] (0x2862540) (3) Data frame handling\nI1014 15:14:37.560083 3803 log.go:181] (0x28620e0) Data frame received for 1\nI1014 15:14:37.560240 3803 log.go:181] (0x28621c0) (1) Data frame handling\nI1014 15:14:37.560413 3803 log.go:181] (0x28621c0) (1) Data frame sent\nI1014 15:14:37.562671 3803 log.go:181] (0x28620e0) (0x28621c0) Stream removed, broadcasting: 1\nI1014 15:14:37.563006 3803 log.go:181] (0x28620e0) Go away received\nI1014 15:14:37.565872 3803 log.go:181] (0x28620e0) (0x28621c0) Stream removed, broadcasting: 1\nI1014 15:14:37.566033 3803 log.go:181] (0x28620e0) (0x2862540) Stream removed, broadcasting: 3\nI1014 15:14:37.566155 3803 log.go:181] (0x28620e0) (0x293e070) Stream removed, broadcasting: 5\n" Oct 14 15:14:37.577: INFO: stdout: "affinity-nodeport-timeout-m5cpf" Oct 14 15:14:52.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config exec --namespace=services-7122 execpod-affinity67df2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.15:30901/' Oct 14 15:14:54.145: INFO: stderr: "I1014 15:14:54.001438 3823 log.go:181] (0x258e000) (0x258e070) Create stream\nI1014 15:14:54.003481 3823 log.go:181] (0x258e000) (0x258e070) Stream added, broadcasting: 1\nI1014 15:14:54.011608 3823 log.go:181] (0x258e000) Reply frame received for 1\nI1014 15:14:54.012147 3823 log.go:181] (0x258e000) (0x2da2070) Create stream\nI1014 15:14:54.012229 3823 log.go:181] (0x258e000) (0x2da2070) Stream added, broadcasting: 3\nI1014 15:14:54.013646 3823 log.go:181] (0x258e000) Reply frame received for 3\nI1014 15:14:54.013880 3823 log.go:181] (0x258e000) (0x258e230) 
Create stream\nI1014 15:14:54.013944 3823 log.go:181] (0x258e000) (0x258e230) Stream added, broadcasting: 5\nI1014 15:14:54.015027 3823 log.go:181] (0x258e000) Reply frame received for 5\nI1014 15:14:54.116103 3823 log.go:181] (0x258e000) Data frame received for 5\nI1014 15:14:54.116498 3823 log.go:181] (0x258e230) (5) Data frame handling\nI1014 15:14:54.117320 3823 log.go:181] (0x258e230) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.15:30901/\nI1014 15:14:54.118390 3823 log.go:181] (0x258e000) Data frame received for 3\nI1014 15:14:54.118523 3823 log.go:181] (0x2da2070) (3) Data frame handling\nI1014 15:14:54.118648 3823 log.go:181] (0x2da2070) (3) Data frame sent\nI1014 15:14:54.119447 3823 log.go:181] (0x258e000) Data frame received for 3\nI1014 15:14:54.119537 3823 log.go:181] (0x2da2070) (3) Data frame handling\nI1014 15:14:54.119652 3823 log.go:181] (0x258e000) Data frame received for 5\nI1014 15:14:54.119800 3823 log.go:181] (0x258e230) (5) Data frame handling\nI1014 15:14:54.121674 3823 log.go:181] (0x258e000) Data frame received for 1\nI1014 15:14:54.121772 3823 log.go:181] (0x258e070) (1) Data frame handling\nI1014 15:14:54.121876 3823 log.go:181] (0x258e070) (1) Data frame sent\nI1014 15:14:54.122304 3823 log.go:181] (0x258e000) (0x258e070) Stream removed, broadcasting: 1\nI1014 15:14:54.124366 3823 log.go:181] (0x258e000) Go away received\nI1014 15:14:54.133852 3823 log.go:181] (0x258e000) (0x258e070) Stream removed, broadcasting: 1\nI1014 15:14:54.135670 3823 log.go:181] (0x258e000) (0x2da2070) Stream removed, broadcasting: 3\nI1014 15:14:54.136420 3823 log.go:181] (0x258e000) (0x258e230) Stream removed, broadcasting: 5\n" Oct 14 15:14:54.146: INFO: stdout: "affinity-nodeport-timeout-grhgd" Oct 14 15:14:54.146: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-7122, will wait for the garbage collector to delete the pods Oct 14 15:14:54.280: INFO: Deleting 
ReplicationController affinity-nodeport-timeout took: 7.972139ms Oct 14 15:14:54.881: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.73686ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:15:05.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7122" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:66.592 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":251,"skipped":4131,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:15:05.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers 
STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Oct 14 15:15:05.937: INFO: Waiting up to 5m0s for pod "client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623" in namespace "containers-3637" to be "Succeeded or Failed" Oct 14 15:15:05.963: INFO: Pod "client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623": Phase="Pending", Reason="", readiness=false. Elapsed: 26.276768ms Oct 14 15:15:08.096: INFO: Pod "client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158670681s Oct 14 15:15:10.102: INFO: Pod "client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165162944s STEP: Saw pod success Oct 14 15:15:10.102: INFO: Pod "client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623" satisfied condition "Succeeded or Failed" Oct 14 15:15:10.106: INFO: Trying to get logs from node latest-worker pod client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623 container test-container: STEP: delete the pod Oct 14 15:15:10.168: INFO: Waiting for pod client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623 to disappear Oct 14 15:15:10.260: INFO: Pod client-containers-a9fb2abe-4bb8-445b-afac-471efcf2a623 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:15:10.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3637" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":252,"skipped":4139,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:15:10.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 14 15:15:14.565: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:15:14.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8974" for 
this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":4145,"failed":0} ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:15:14.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 14 15:15:14.789: INFO: Waiting up to 1m0s for all nodes to be ready Oct 14 15:16:14.860: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 14 15:16:14.899: INFO: Created pod: pod0-sched-preemption-low-priority Oct 14 15:16:15.024: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:16:29.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3412" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:74.617 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":254,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:16:29.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:16:29.396: INFO: Waiting up to 5m0s for pod "busybox-user-65534-bce9582a-2d99-4d9e-a41d-36260aedbe06" in namespace "security-context-test-5027" to be "Succeeded or Failed" Oct 14 15:16:29.402: INFO: Pod "busybox-user-65534-bce9582a-2d99-4d9e-a41d-36260aedbe06": Phase="Pending", Reason="", readiness=false. Elapsed: 5.767627ms Oct 14 15:16:31.441: INFO: Pod "busybox-user-65534-bce9582a-2d99-4d9e-a41d-36260aedbe06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045134522s Oct 14 15:16:33.478: INFO: Pod "busybox-user-65534-bce9582a-2d99-4d9e-a41d-36260aedbe06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081706309s Oct 14 15:16:33.478: INFO: Pod "busybox-user-65534-bce9582a-2d99-4d9e-a41d-36260aedbe06" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:16:33.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5027" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4185,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:16:33.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 15:16:33.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e" in namespace "projected-6184" to be "Succeeded or Failed" Oct 14 15:16:33.625: INFO: Pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.88461ms Oct 14 15:16:35.778: INFO: Pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.20138899s Oct 14 15:16:37.787: INFO: Pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21005702s Oct 14 15:16:39.794: INFO: Pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.217013366s STEP: Saw pod success Oct 14 15:16:39.794: INFO: Pod "downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e" satisfied condition "Succeeded or Failed" Oct 14 15:16:39.800: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e container client-container: STEP: delete the pod Oct 14 15:16:39.881: INFO: Waiting for pod downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e to disappear Oct 14 15:16:39.886: INFO: Pod downwardapi-volume-6d2f9351-8ce6-4936-9e07-28ad1042541e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:16:39.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6184" for this suite. 
• [SLOW TEST:6.404 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:16:39.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-3392bc35-b12d-4b88-93e4-c0350e61e959 STEP: Creating secret with name s-test-opt-upd-fa083ed1-8296-487c-92ba-b120f6bf49bc STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3392bc35-b12d-4b88-93e4-c0350e61e959 STEP: Updating 
secret s-test-opt-upd-fa083ed1-8296-487c-92ba-b120f6bf49bc STEP: Creating secret with name s-test-opt-create-5d67aea2-c7be-49d9-8ce2-4b77906adfb1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:18:16.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3882" for this suite. • [SLOW TEST:96.900 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":257,"skipped":4212,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:18:16.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:18:28.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2268" for this suite. • [SLOW TEST:11.350 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":258,"skipped":4216,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:18:28.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 15:18:28.227: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 15:18:28.243: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 15:18:28.248: INFO: Logging pods the apiserver thinks is on node latest-worker before test Oct 14 15:18:28.256: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.256: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 15:18:28.256: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.256: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 15:18:28.256: INFO: pod-projected-secrets-abf22abd-2f5a-47b8-9ec0-caedc9581f14 from projected-3882 started at 2020-10-14 15:16:40 +0000 UTC (3 container statuses recorded) Oct 14 15:18:28.257: INFO: Container creates-volume-test ready: false, restart count 0 Oct 14 15:18:28.257: INFO: Container dels-volume-test ready: false, restart count 0 Oct 14 15:18:28.257: INFO: Container upds-volume-test ready: false, restart count 0 Oct 14 15:18:28.257: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Oct 14 15:18:28.284: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.285: INFO: Container coredns ready: true, restart count 0 Oct 14 15:18:28.285: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.285: INFO: Container coredns ready: true, restart count 0 Oct 14 15:18:28.285: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.285: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 15:18:28.285: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.285: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 15:18:28.285: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started 
at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 15:18:28.285: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.163de4e38ac06b49], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:18:29.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6057" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":259,"skipped":4220,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:18:29.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-0c20d229-d33d-4673-9d3a-6554605d647c in namespace container-probe-704 Oct 14 15:18:33.481: INFO: Started pod busybox-0c20d229-d33d-4673-9d3a-6554605d647c in namespace container-probe-704 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 15:18:33.486: INFO: Initial restart count of pod busybox-0c20d229-d33d-4673-9d3a-6554605d647c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:34.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-704" for this suite. 
• [SLOW TEST:245.223 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4232,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:34.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the 
termination message should be set Oct 14 15:22:39.070: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:39.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7569" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:39.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 14 
15:22:39.283: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:46.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2314" for this suite. • [SLOW TEST:7.541 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":262,"skipped":4279,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:46.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-be691caa-b15c-40e8-852e-ae421c83eb29 STEP: Creating a pod to test consume secrets Oct 14 15:22:47.284: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b" in namespace "projected-3987" to be "Succeeded or Failed" Oct 14 15:22:47.425: INFO: Pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b": Phase="Pending", Reason="", readiness=false. Elapsed: 140.45557ms Oct 14 15:22:49.449: INFO: Pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164371629s Oct 14 15:22:51.455: INFO: Pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b": Phase="Running", Reason="", readiness=true. Elapsed: 4.170808468s Oct 14 15:22:53.464: INFO: Pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.179122929s STEP: Saw pod success Oct 14 15:22:53.464: INFO: Pod "pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b" satisfied condition "Succeeded or Failed" Oct 14 15:22:53.468: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b container projected-secret-volume-test: STEP: delete the pod Oct 14 15:22:53.527: INFO: Waiting for pod pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b to disappear Oct 14 15:22:53.540: INFO: Pod pod-projected-secrets-086c994c-5a24-44a4-bbf7-0b69ee2cce3b no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:53.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3987" for this suite. • [SLOW TEST:6.894 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4281,"failed":0} S ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:53.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 14 15:22:53.708: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 14 15:22:53.717: INFO: starting watch STEP: patching STEP: updating Oct 14 15:22:53.741: INFO: waiting for watch events with expected annotations Oct 14 15:22:53.742: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:53.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-3972" for this suite. 
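The create/get/list/watch/patch/update/status/delete operations above all act on `networking.k8s.io/v1` Ingress objects. As a reference point, a minimal Ingress of that shape can be sketched as a plain dict — all names, the host, and the backend service here are illustrative, not taken from the test:

```python
# Minimal networking.k8s.io/v1 Ingress manifest of the kind the API
# operations above act on. Every name below is illustrative.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "example-ingress"},
    "spec": {
        "rules": [{
            "host": "example.com",
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",  # required in v1 (unlike v1beta1)
                "backend": {"service": {"name": "example-svc",
                                        "port": {"number": 80}}},
            }]},
        }],
    },
}
```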
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":264,"skipped":4282,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:53.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5543591c-e77b-4fc1-b291-a0561408971f STEP: Creating a pod to test consume configMaps Oct 14 15:22:53.967: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a" in namespace "projected-5262" to be "Succeeded or Failed" Oct 14 15:22:53.987: INFO: Pod "pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.357782ms Oct 14 15:22:56.026: INFO: Pod "pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05876605s Oct 14 15:22:58.085: INFO: Pod "pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116828051s STEP: Saw pod success Oct 14 15:22:58.085: INFO: Pod "pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a" satisfied condition "Succeeded or Failed" Oct 14 15:22:58.095: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a container projected-configmap-volume-test: STEP: delete the pod Oct 14 15:22:58.205: INFO: Waiting for pod pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a to disappear Oct 14 15:22:58.226: INFO: Pod pod-projected-configmaps-e9247396-fe32-486a-b64f-a51c6cbbdd0a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:22:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5262" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4295,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:22:58.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:23:09.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5300" for this suite. • [SLOW TEST:11.225 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":266,"skipped":4300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:23:09.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:23:16.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2237" for this suite. • [SLOW TEST:7.148 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":267,"skipped":4334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:23:16.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-24bd29d6-3636-400a-ba27-809634ab0dba in namespace container-probe-5791 Oct 14 15:23:20.767: INFO: Started pod busybox-24bd29d6-3636-400a-ba27-809634ab0dba in namespace container-probe-5791 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 15:23:20.776: INFO: Initial restart count of pod busybox-24bd29d6-3636-400a-ba27-809634ab0dba is 0 Oct 14 15:24:08.970: INFO: Restart count of pod container-probe-5791/busybox-24bd29d6-3636-400a-ba27-809634ab0dba is now 1 (48.194217399s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:09.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5791" for this suite. • [SLOW TEST:52.404 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4364,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:09.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-24b2cf83-f9d2-4c3b-9040-4c6653690683 
STEP: Creating a pod to test consume secrets Oct 14 15:24:09.156: INFO: Waiting up to 5m0s for pod "pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e" in namespace "secrets-6289" to be "Succeeded or Failed" Oct 14 15:24:09.206: INFO: Pod "pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.006747ms Oct 14 15:24:11.213: INFO: Pod "pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056107274s Oct 14 15:24:13.222: INFO: Pod "pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064887539s STEP: Saw pod success Oct 14 15:24:13.222: INFO: Pod "pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e" satisfied condition "Succeeded or Failed" Oct 14 15:24:13.228: INFO: Trying to get logs from node latest-worker pod pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e container secret-volume-test: STEP: delete the pod Oct 14 15:24:13.254: INFO: Waiting for pod pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e to disappear Oct 14 15:24:13.271: INFO: Pod pod-secrets-b49f94d1-c367-4e29-82fc-d961961f719e no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:13.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6289" for this suite. 
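The pod this test creates consumes a single secret through two separate volumes. A sketch of such a pod spec, with illustrative names and mount paths (the test's generated names differ):

```python
# Pod mounting the same secret via two volumes, as in the
# "consumable in multiple volumes in a pod" test above.
# Names, image, and mount paths are illustrative.
secret_name = "secret-test-example"
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "secret-volume-test",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            "volumeMounts": [
                {"name": "secret-volume-1", "mountPath": "/etc/secret-volume-1"},
                {"name": "secret-volume-2", "mountPath": "/etc/secret-volume-2"},
            ],
        }],
        # Both volumes reference the one secret created by the test.
        "volumes": [
            {"name": "secret-volume-1", "secret": {"secretName": secret_name}},
            {"name": "secret-volume-2", "secret": {"secretName": secret_name}},
        ],
    },
}
```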
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4365,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:13.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 14 15:24:17.409: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:17.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6540" for this suite. 
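The behavior verified here — an empty termination message when the pod succeeds and `TerminationMessagePolicy` is `FallbackToLogsOnError` — is driven by two container-spec fields. A sketch of a container exercising them (image and command are illustrative):

```python
# Container spec fields relevant to the termination-message tests above.
# FallbackToLogsOnError falls back to log output only when the container
# fails; on success with nothing written to terminationMessagePath, the
# message stays empty, which is exactly what this test asserts.
container = {
    "name": "termination-message-container",             # illustrative
    "image": "k8s.gcr.io/e2e-test-images/busybox:1.29",  # illustrative
    "command": ["/bin/true"],  # exits 0 and writes no message
    "terminationMessagePath": "/dev/termination-log",
    "terminationMessagePolicy": "FallbackToLogsOnError",
}
```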
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4374,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:17.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:24:17.613: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:24.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4457" for this suite. 
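Listing custom resource definition objects presupposes CRDs of the usual `apiextensions.k8s.io/v1` shape. A minimal sketch — the group, plural, and kind are illustrative, not the randomized names the test generates:

```python
# Minimal apiextensions.k8s.io/v1 CustomResourceDefinition.
# Group, kind, and plural are illustrative.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "foos.example.com"},  # must be <plural>.<group>
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "foos", "singular": "foo", "kind": "Foo"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,  # exactly one version may set storage: true
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}
```

The apiserver enforces the `metadata.name` convention: it must equal `spec.names.plural` joined with `spec.group` by a dot.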
• [SLOW TEST:6.634 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":271,"skipped":4386,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:24.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if 
init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 14 15:24:24.209: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:30.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3949" for this suite. • [SLOW TEST:6.776 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":272,"skipped":4398,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:30.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 14 15:24:31.004: INFO: Waiting up to 5m0s for pod "pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5" in namespace "emptydir-9952" to be "Succeeded or Failed" Oct 14 15:24:31.021: INFO: Pod "pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.845674ms Oct 14 15:24:33.029: INFO: Pod "pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024802145s Oct 14 15:24:35.037: INFO: Pod "pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032368636s STEP: Saw pod success Oct 14 15:24:35.037: INFO: Pod "pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5" satisfied condition "Succeeded or Failed" Oct 14 15:24:35.042: INFO: Trying to get logs from node latest-worker pod pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5 container test-container: STEP: delete the pod Oct 14 15:24:35.085: INFO: Waiting for pod pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5 to disappear Oct 14 15:24:35.270: INFO: Pod pod-2ee1f497-e2e3-4895-ace6-51e14f8633e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9952" for this suite. 
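In the EmptyDir tests, "(root,0644,default)" means the container runs as root, writes a file with mode 0644, and the volume uses the default medium — an empty `emptyDir: {}` backed by node storage, as opposed to `{"medium": "Memory"}` for tmpfs. A sketch of such a pod with illustrative names:

```python
# Pod with an emptyDir volume on the default medium, as in the
# "(root,0644,default)" test above. Names and image are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        # {} selects the default medium (node disk);
        # {"medium": "Memory"} would select tmpfs instead.
        "volumes": [{"name": "test-volume", "emptyDir": {}}],
    },
}
```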
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4407,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:35.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Oct 14 15:24:35.499: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:36.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6273" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":274,"skipped":4409,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:36.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:41.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7752" for this suite. 
• [SLOW TEST:5.190 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":275,"skipped":4411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:41.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:24:42.057: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 14 15:24:47.062: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 14 15:24:47.062: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for 
deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 15:24:51.194: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5713 /apis/apps/v1/namespaces/deployment-5713/deployments/test-cleanup-deployment 3a2aa7a1-75a8-457f-9bb8-7891e260aabd 1158572 1 2020-10-14 15:24:47 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-10-14 15:24:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 15:24:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x91b3778 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-14 15:24:47 +0000 
UTC,LastTransitionTime:2020-10-14 15:24:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-10-14 15:24:50 +0000 UTC,LastTransitionTime:2020-10-14 15:24:47 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 14 15:24:51.201: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-5713 /apis/apps/v1/namespaces/deployment-5713/replicasets/test-cleanup-deployment-5d446bdd47 763151e4-9e49-48be-89d7-818dc7bbb05a 1158561 1 2020-10-14 15:24:47 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3a2aa7a1-75a8-457f-9bb8-7891e260aabd 0x91b3ea7 0x91b3ea8}] [] [{kube-controller-manager Update apps/v1 2020-10-14 15:24:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3a2aa7a1-75a8-457f-9bb8-7891e260aabd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x91b3f38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 14 15:24:51.208: INFO: Pod "test-cleanup-deployment-5d446bdd47-jx6vl" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-jx6vl test-cleanup-deployment-5d446bdd47- deployment-5713 /api/v1/namespaces/deployment-5713/pods/test-cleanup-deployment-5d446bdd47-jx6vl 3afdeae5-6b4d-4e41-bf9c-bdeefc4a18f1 1158560 0 2020-10-14 15:24:47 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 763151e4-9e49-48be-89d7-818dc7bbb05a 0xab162f7 0xab162f8}] [] [{kube-controller-manager Update v1 2020-10-14 15:24:47 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"763151e4-9e49-48be-89d7-818dc7bbb05a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 15:24:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.202\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mdndd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mdndd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mdndd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:24:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:24:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 15:24:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.2.202,StartTime:2020-10-14 15:24:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 15:24:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://5836567c383feb48c9c2a4c59ce0f0e5a8bb54517ff3e2948594ecd2ab658c92,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:51.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5713" for this suite. 
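The Deployment dump above shows `RevisionHistoryLimit:*0`, which is why the controller cleans up the old ReplicaSet history as soon as the rollout completes. A minimal manifest reproducing that behavior might look like the following sketch (field values taken from the test objects above; not the exact spec the e2e test builds):

```yaml
# Sketch: a Deployment that keeps no superseded ReplicaSets around.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are deleted right after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

With `revisionHistoryLimit: 0`, only the current ReplicaSet survives, which is exactly what the "deployment should delete old replica sets" check waits for.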
• [SLOW TEST:9.249 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":276,"skipped":4441,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:51.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 14 15:24:55.990: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f3f32807-5880-43b8-af8a-5f704b283221" Oct 14 15:24:55.991: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-f3f32807-5880-43b8-af8a-5f704b283221" in namespace "pods-6060" to be "terminated due to deadline exceeded" Oct 14 15:24:55.999: INFO: Pod "pod-update-activedeadlineseconds-f3f32807-5880-43b8-af8a-5f704b283221": Phase="Running", Reason="", readiness=true. Elapsed: 7.923042ms Oct 14 15:24:58.073: INFO: Pod "pod-update-activedeadlineseconds-f3f32807-5880-43b8-af8a-5f704b283221": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.081871367s Oct 14 15:24:58.073: INFO: Pod "pod-update-activedeadlineseconds-f3f32807-5880-43b8-af8a-5f704b283221" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:58.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6060" for this suite. • [SLOW TEST:6.866 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4453,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:58.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Oct 14 15:24:58.237: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix200437481/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:24:59.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1997" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":278,"skipped":4473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:24:59.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7296 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7296 STEP: Creating statefulset with conflicting port in namespace statefulset-7296 STEP: Waiting until pod test-pod will start running in namespace statefulset-7296 STEP: Waiting until stateful pod ss-0 will be 
recreated and deleted at least once in namespace statefulset-7296 Oct 14 15:25:05.509: INFO: Observed stateful pod in namespace: statefulset-7296, name: ss-0, uid: 5b5b9eee-a3a1-473f-abb1-d8dda434c878, status phase: Pending. Waiting for statefulset controller to delete. Oct 14 15:25:06.032: INFO: Observed stateful pod in namespace: statefulset-7296, name: ss-0, uid: 5b5b9eee-a3a1-473f-abb1-d8dda434c878, status phase: Failed. Waiting for statefulset controller to delete. Oct 14 15:25:06.081: INFO: Observed stateful pod in namespace: statefulset-7296, name: ss-0, uid: 5b5b9eee-a3a1-473f-abb1-d8dda434c878, status phase: Failed. Waiting for statefulset controller to delete. Oct 14 15:25:06.134: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7296 STEP: Removing pod with conflicting port in namespace statefulset-7296 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7296 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 15:25:10.279: INFO: Deleting all statefulset in ns statefulset-7296 Oct 14 15:25:10.285: INFO: Scaling statefulset ss to 0 Oct 14 15:25:30.313: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 15:25:30.317: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:25:30.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7296" for this suite. 
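The test above relies on the StatefulSet controller recreating a failed pod under the same ordinal name (ss-0). A rough sketch of the objects involved, assuming illustrative names and an arbitrary hostPort (the actual port the test picks is not shown in the log):

```yaml
# Headless service required by the StatefulSet (name "test" matches the log).
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None
  selector:
    app: ss
---
# When ss-0 fails (e.g. a hostPort conflict with a pre-existing pod),
# the controller deletes it and recreates a pod with the same name.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        ports:
        - containerPort: 80
          hostPort: 21017   # illustrative; conflicts with the "test-pod" in the test
```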
• [SLOW TEST:31.126 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":279,"skipped":4580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:25:30.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:26:30.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3087" for this suite. • [SLOW TEST:60.179 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4607,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:26:30.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:26:36.929: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:26:39.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285997, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 15:26:41.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285997, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738285996, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:26:44.273: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:26:44.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4056" for this suite. STEP: Destroying namespace "webhook-4056-markers" for this suite. 
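The webhooks the test lists and then deletes as a collection are registered through `ValidatingWebhookConfiguration` objects pointing at the `e2e-test-webhook` service deployed above. A hedged sketch of such a configuration (webhook name, path, and rules are illustrative, not the exact ones the test creates):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-configmap-data   # illustrative name
webhooks:
- name: deny-unwanted-configmap-data.example.com
  clientConfig:
    service:
      name: e2e-test-webhook       # service name from the log
      namespace: webhook-4056      # namespace from the log
      path: /configmaps            # illustrative path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail   # non-compliant ConfigMaps are rejected while this exists
```

Deleting the collection removes the admission rule, which is why the second attempt to create the non-compliant ConfigMap in the test succeeds.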
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.409 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":281,"skipped":4618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:26:44.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be 
provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:26:51.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8009" for this suite. STEP: Destroying namespace "nsdeletetest-1441" for this suite. Oct 14 15:26:51.421: INFO: Namespace nsdeletetest-1441 was already deleted STEP: Destroying namespace "nsdeletetest-8274" for this suite. • [SLOW TEST:6.458 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":282,"skipped":4652,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:26:51.429: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 14 15:26:51.495: INFO: Waiting up to 5m0s for pod "downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9" in namespace "downward-api-1127" to be "Succeeded or Failed" Oct 14 15:26:51.536: INFO: Pod "downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9": Phase="Pending", Reason="", readiness=false. Elapsed: 40.040795ms Oct 14 15:26:53.555: INFO: Pod "downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059027814s Oct 14 15:26:55.563: INFO: Pod "downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067153152s STEP: Saw pod success Oct 14 15:26:55.563: INFO: Pod "downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9" satisfied condition "Succeeded or Failed" Oct 14 15:26:55.598: INFO: Trying to get logs from node latest-worker pod downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9 container dapi-container: STEP: delete the pod Oct 14 15:26:55.788: INFO: Waiting for pod downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9 to disappear Oct 14 15:26:55.834: INFO: Pod downward-api-7a2b7bc3-b2fb-44e4-b2db-2df6143a19a9 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:26:55.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1127" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":283,"skipped":4659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:26:55.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 15:26:55.905: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 15:26:55.933: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 15:26:55.961: INFO: Logging pods the apiserver thinks is on node latest-worker before test Oct 14 15:26:55.970: INFO: kindnet-jwscz from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.970: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 15:26:55.970: INFO: kube-proxy-cg6dw from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.970: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 15:26:55.970: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Oct 14 15:26:55.984: INFO: coredns-f9fd979d6-l8q79 from kube-system started at 2020-10-10 08:59:26 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.984: INFO: Container coredns ready: true, restart count 0 Oct 14 15:26:55.984: INFO: coredns-f9fd979d6-rhzs8 from kube-system started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.984: INFO: Container coredns ready: true, restart count 0 Oct 14 15:26:55.984: INFO: kindnet-g7vp5 from kube-system started at 2020-10-10 08:58:57 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.984: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 15:26:55.985: INFO: kube-proxy-bmxmj from kube-system started at 2020-10-10 08:58:56 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.985: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 15:26:55.985: INFO: local-path-provisioner-78776bfc44-6tlk5 from local-path-storage started at 2020-10-10 08:59:16 +0000 UTC (1 container statuses recorded) Oct 14 15:26:55.985: INFO: Container local-path-provisioner ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5890c7eb-a6fe-4260-afad-420f167a580e 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5890c7eb-a6fe-4260-afad-420f167a580e off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5890c7eb-a6fe-4260-afad-420f167a580e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:32:04.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8815" for this suite. 
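The scheduling rule this test exercises — pod4 binds hostPort 54322 on 0.0.0.0, so pod5 cannot bind the same port/protocol even on 127.0.0.1 — can be sketched as follows. This is a minimal illustration using simplified local types, not the real Kubernetes `HostPortInfo` implementation: two host-port requests conflict when port and protocol match and their host IPs overlap, where the wildcard address (0.0.0.0 or empty) overlaps everything.

```go
package main

import "fmt"

// hostPort is a simplified stand-in for the (hostIP, protocol, hostPort)
// triple tracked per node; field names are illustrative, not the real API.
type hostPort struct {
	IP       string // "" is treated like 0.0.0.0
	Protocol string
	Port     int
}

// conflicts reports whether two hostPort requests cannot coexist on one node.
func conflicts(a, b hostPort) bool {
	if a.Port != b.Port || a.Protocol != b.Protocol {
		return false
	}
	wildcard := func(ip string) bool { return ip == "" || ip == "0.0.0.0" }
	// Same port and protocol: they clash if either side binds the wildcard
	// address, or both bind the same specific address.
	return wildcard(a.IP) || wildcard(b.IP) || a.IP == b.IP
}

func main() {
	pod4 := hostPort{IP: "0.0.0.0", Protocol: "TCP", Port: 54322}
	pod5 := hostPort{IP: "127.0.0.1", Protocol: "TCP", Port: 54322}
	fmt.Println(conflicts(pod4, pod5)) // prints "true": pod5 stays unscheduled
}
```

This is why the log shows pod5 "expect not scheduled": the wildcard bind by pod4 shadows every specific host IP on that node.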
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.478 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":284,"skipped":4690,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:32:04.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 14 15:32:11.046: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 14 15:32:13.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286331, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286331, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286331, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286330, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:32:16.127: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:32:16.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 
custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:32:17.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-796" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.018 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":285,"skipped":4708,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:32:17.354: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3861d34e-67da-4203-8f8d-1f1668d0893a STEP: Creating a pod to test consume configMaps Oct 14 15:32:17.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0" in namespace "configmap-6688" to be "Succeeded or Failed" Oct 14 15:32:17.525: INFO: Pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.076871ms Oct 14 15:32:19.535: INFO: Pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027729052s Oct 14 15:32:21.545: INFO: Pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.037953283s Oct 14 15:32:23.560: INFO: Pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.052648617s STEP: Saw pod success Oct 14 15:32:23.560: INFO: Pod "pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0" satisfied condition "Succeeded or Failed" Oct 14 15:32:23.566: INFO: Trying to get logs from node latest-worker pod pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0 container configmap-volume-test: STEP: delete the pod Oct 14 15:32:23.669: INFO: Waiting for pod pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0 to disappear Oct 14 15:32:23.678: INFO: Pod pod-configmaps-296d448c-5043-4b82-92e1-22d2768ec0a0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:32:23.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6688" for this suite. • [SLOW TEST:6.339 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4715,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:32:23.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:32:31.591: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:32:33.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286351, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286351, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286351, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286351, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:32:36.811: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 14 15:32:40.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config attach --namespace=webhook-6957 to-be-attached-pod -i -c=container1' Oct 14 15:32:45.132: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:32:45.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6957" for this suite. STEP: Destroying namespace "webhook-6957-markers" for this suite. 
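The `kubectl attach` above exits with rc 1 because the registered validating webhook rejects the pods/attach subresource. A trimmed-down sketch of that decision logic, using assumed local struct subsets of the admission/v1 `AdmissionReview` types (illustration only, not the real test's handler):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal, assumed subsets of the admission/v1 request/response shapes.
type admissionRequest struct {
	UID         string `json:"uid"`
	SubResource string `json:"subResource"`
}

type admissionResponse struct {
	UID     string `json:"uid"`
	Allowed bool   `json:"allowed"`
	Message string `json:"message,omitempty"`
}

type admissionReview struct {
	Request  *admissionRequest  `json:"request,omitempty"`
	Response *admissionResponse `json:"response,omitempty"`
}

// review allows everything except the attach subresource, echoing the
// request UID back in the response as the admission contract requires.
func review(in admissionReview) admissionReview {
	resp := &admissionResponse{UID: in.Request.UID, Allowed: true}
	if in.Request.SubResource == "attach" {
		resp.Allowed = false
		resp.Message = "attach is denied by this webhook"
	}
	return admissionReview{Response: resp}
}

func main() {
	in := admissionReview{Request: &admissionRequest{UID: "123", SubResource: "attach"}}
	out, _ := json.Marshal(review(in))
	fmt.Println(string(out))
}
```

In the real test the handler runs behind the e2e-test-webhook Service deployed above, and the API server consults it before allowing the attach.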
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.543 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":287,"skipped":4718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:32:45.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] 
should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 14 15:32:45.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2784' Oct 14 15:32:47.738: INFO: stderr: "" Oct 14 15:32:47.738: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 14 15:32:47.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Oct 14 15:32:49.010: INFO: stderr: "" Oct 14 15:32:49.011: INFO: stdout: "update-demo-nautilus-qqdng update-demo-nautilus-tzglq " Oct 14 15:32:49.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqdng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Oct 14 15:32:50.265: INFO: stderr: "" Oct 14 15:32:50.265: INFO: stdout: "" Oct 14 15:32:50.265: INFO: update-demo-nautilus-qqdng is created but not running Oct 14 15:32:55.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2784' Oct 14 15:32:56.563: INFO: stderr: "" Oct 14 15:32:56.564: INFO: stdout: "update-demo-nautilus-qqdng update-demo-nautilus-tzglq " Oct 14 15:32:56.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqdng -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Oct 14 15:32:57.811: INFO: stderr: "" Oct 14 15:32:57.812: INFO: stdout: "true" Oct 14 15:32:57.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqdng -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Oct 14 15:32:59.066: INFO: stderr: "" Oct 14 15:32:59.067: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 15:32:59.067: INFO: validating pod update-demo-nautilus-qqdng Oct 14 15:32:59.073: INFO: got data: { "image": "nautilus.jpg" } Oct 14 15:32:59.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 14 15:32:59.074: INFO: update-demo-nautilus-qqdng is verified up and running Oct 14 15:32:59.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzglq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2784' Oct 14 15:33:00.288: INFO: stderr: "" Oct 14 15:33:00.288: INFO: stdout: "true" Oct 14 15:33:00.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tzglq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2784' Oct 14 15:33:01.501: INFO: stderr: "" Oct 14 15:33:01.501: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 15:33:01.501: INFO: validating pod update-demo-nautilus-tzglq Oct 14 15:33:01.508: INFO: got data: { "image": "nautilus.jpg" } Oct 14 15:33:01.509: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 15:33:01.509: INFO: update-demo-nautilus-tzglq is verified up and running STEP: using delete to clean up resources Oct 14 15:33:01.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2784' Oct 14 15:33:02.752: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 14 15:33:02.752: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 14 15:33:02.753: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2784' Oct 14 15:33:04.014: INFO: stderr: "No resources found in kubectl-2784 namespace.\n" Oct 14 15:33:04.014: INFO: stdout: "" Oct 14 15:33:04.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2784 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 15:33:05.311: INFO: stderr: "" Oct 14 15:33:05.311: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:33:05.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2784" for this suite. 
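The readiness polling above drives `kubectl get pods -o template` with a Go text/template that prints "true" only when the update-demo container reports a running state. The sketch below reproduces that evaluation in plain Go; the `exists` helper is a hand-rolled assumption standing in for the one kubectl's template engine provides (walk nested map keys, report presence):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// exists walks nested map keys, mimicking kubectl's template helper.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

// The same template string the test passes to kubectl.
const tmpl = `{{if (exists . "status" "containerStatuses")}}` +
	`{{range .status.containerStatuses}}` +
	`{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}` +
	`{{end}}{{end}}`

// render evaluates the template against a pod's JSON representation.
func render(podJSON string) string {
	var pod map[string]interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		panic(err)
	}
	t := template.Must(template.New("pod").
		Funcs(template.FuncMap{"exists": exists}).Parse(tmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, pod); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	running := `{"status":{"containerStatuses":[{"name":"update-demo","state":{"running":{}}}]}}`
	pending := `{"status":{}}`
	fmt.Println(render(running)) // prints "true"
	fmt.Println(render(pending)) // prints an empty line
}
```

This matches the log: the first poll returns an empty stdout ("created but not running"), and a later poll returns "true" once the container status carries a running state.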
• [SLOW TEST:20.084 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":288,"skipped":4772,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:33:05.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 15:33:05.432: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8" in namespace "downward-api-515" to be "Succeeded or Failed" Oct 14 15:33:05.461: INFO: Pod "downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.654914ms Oct 14 15:33:07.470: INFO: Pod "downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038276294s Oct 14 15:33:09.479: INFO: Pod "downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047090663s STEP: Saw pod success Oct 14 15:33:09.479: INFO: Pod "downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8" satisfied condition "Succeeded or Failed" Oct 14 15:33:09.484: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8 container client-container: STEP: delete the pod Oct 14 15:33:09.532: INFO: Waiting for pod downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8 to disappear Oct 14 15:33:09.561: INFO: Pod downwardapi-volume-990e1ef0-d76e-4a92-ad1c-ee4a0810ccd8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:33:09.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-515" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4778,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:33:09.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-3b1f885b-1898-4b69-a620-861eb36d7719 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:33:09.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7925" for this suite. 
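The empty-key test above passes because the API server rejects the ConfigMap at validation time. A sketch of that rule, as a deliberately simplified assumption (the real validation in k8s.io/apimachinery has additional cases): data keys must be non-empty, at most 253 characters, and limited to alphanumerics, '-', '_' and '.'.

```go
package main

import (
	"fmt"
	"regexp"
)

// keyRe approximates the character set allowed in ConfigMap data keys.
var keyRe = regexp.MustCompile(`^[-._a-zA-Z0-9]+$`)

// validKey reports whether k would pass this simplified key validation.
func validKey(k string) bool {
	return len(k) > 0 && len(k) <= 253 && keyRe.MatchString(k)
}

func main() {
	fmt.Println(validKey(""))                // prints "false": creation fails, as the test expects
	fmt.Println(validKey("game.properties")) // prints "true"
}
```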
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":290,"skipped":4782,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:33:09.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 14 15:33:09.845: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:33:19.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7537" for this suite. 
• [SLOW TEST:10.244 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":291,"skipped":4794,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:33:19.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-d9808d9a-83d8-4dd1-9271-fc13a45722ff STEP: Creating configMap with name cm-test-opt-upd-26186a7c-cf0e-4256-a1b4-f85fb3a2f6be STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d9808d9a-83d8-4dd1-9271-fc13a45722ff STEP: Updating configmap cm-test-opt-upd-26186a7c-cf0e-4256-a1b4-f85fb3a2f6be STEP: Creating configMap with 
name cm-test-opt-create-46e60d99-9028-4f9c-bdfb-99631f427a65 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:34:36.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5826" for this suite. • [SLOW TEST:76.864 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:34:36.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:34:43.202: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:34:45.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 15:34:47.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286483, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:34:50.298: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:34:50.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8239" for this suite. STEP: Destroying namespace "webhook-8239-markers" for this suite. 
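The fail-closed webhook spec registers a webhook whose backing service the apiserver cannot reach, with `failurePolicy: Fail`, so every matching request is unconditionally rejected. A sketch of that registration (resource and service names are illustrative, not the test's generated names):

```python
# Sketch of a fail-closed webhook registration like the one the test
# creates via the AdmissionRegistration API: the clientConfig points
# at an unreachable service, and failurePolicy "Fail" means any call
# error rejects the request. All names here are illustrative.
webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "fail-closed-example"},
    "webhooks": [{
        "name": "fail-closed.example.com",
        "failurePolicy": "Fail",  # reject on any webhook error or timeout
        "rules": [{
            "apiGroups": [""],
            "apiVersions": ["v1"],
            "operations": ["CREATE"],
            "resources": ["configmaps"],
        }],
        "clientConfig": {"service": {
            "name": "no-such-service",
            "namespace": "default",
            "path": "/validate",
        }},
        "sideEffects": "None",
        "admissionReviewVersions": ["v1"],
    }],
}
```

With `failurePolicy: Ignore` instead, the same unreachable webhook would be skipped and the ConfigMap create would succeed; "Fail" is what makes the rejection unconditional.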
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.683 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":293,"skipped":4839,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:34:50.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:34:58.552: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:35:00.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 15:35:02.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286498, 
loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:35:05.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:35:06.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5668" for this suite. STEP: Destroying namespace "webhook-5668-markers" for this suite. 
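Each completed spec in this run emits one JSON progress record (the `{"msg":"PASSED …","total":303,…}` lines interleaved throughout the log). A small parser for those records, using the record this very spec produces:

```python
import json

def parse_progress(line: str) -> dict:
    """Parse one of the suite's JSON progress records and add a
    derived 'remaining' count (specs selected but not yet run)."""
    rec = json.loads(line)
    rec["remaining"] = rec["total"] - rec["completed"]
    return rec

# Record taken verbatim from this log.
rec = parse_progress(
    '{"msg":"PASSED [sig-api-machinery] AdmissionWebhook '
    '[Privileged:ClusterAdmin] listing mutating webhooks should work '
    '[Conformance]","total":303,"completed":294,"skipped":4841,"failed":0}'
)
```

Here `total` is the 303 specs selected for this run, `skipped` counts specs filtered out of the full 5232-spec suite so far, and `remaining` works out to 9.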
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.037 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":294,"skipped":4841,"failed":0} [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:35:06.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:37:06.675: INFO: Deleting pod "var-expansion-b8103fe6-780f-4594-bd52-a164d7b53abf" in namespace "var-expansion-1553" Oct 14 15:37:06.684: 
INFO: Wait up to 5m0s for pod "var-expansion-b8103fe6-780f-4594-bd52-a164d7b53abf" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:37:08.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1553" for this suite. • [SLOW TEST:122.226 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":295,"skipped":4841,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:37:08.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-9c336483-96e8-440f-8f92-e8c6a298b67b [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:37:08.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7693" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":296,"skipped":4850,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:37:08.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:37:08.942: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7252' Oct 14 15:37:10.889: INFO: stderr: "" Oct 14 15:37:10.889: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 14 15:37:10.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7252' Oct 14 15:37:14.202: INFO: stderr: "" Oct 14 15:37:14.203: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 14 15:37:15.234: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:37:15.235: INFO: Found 1 / 1 Oct 14 15:37:15.235: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 14 15:37:15.242: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 15:37:15.243: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Oct 14 15:37:15.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe pod agnhost-primary-vrc2s --namespace=kubectl-7252' Oct 14 15:37:16.573: INFO: stderr: "" Oct 14 15:37:16.574: INFO: stdout: "Name: agnhost-primary-vrc2s\nNamespace: kubectl-7252\nPriority: 0\nNode: latest-worker/172.18.0.15\nStart Time: Wed, 14 Oct 2020 15:37:10 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.222\nIPs:\n IP: 10.244.2.222\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://4cd965f7d811ba1c55716a9266579dfbce02673da4adb82615c2c53a097db24f\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 14 Oct 2020 15:37:13 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from 
default-token-mpxp2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-mpxp2:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-mpxp2\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-7252/agnhost-primary-vrc2s to latest-worker\n Normal Pulled 4s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 3s kubelet Started container agnhost-primary\n" Oct 14 15:37:16.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-7252' Oct 14 15:37:18.062: INFO: stderr: "" Oct 14 15:37:18.062: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7252\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: agnhost-primary-vrc2s\n" Oct 14 15:37:18.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-7252' Oct 14 15:37:19.372: INFO: stderr: "" Oct 14 15:37:19.372: INFO: stdout: "Name: 
agnhost-primary\nNamespace: kubectl-7252\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.97.82.210\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.222:6379\nSession Affinity: None\nEvents: \n" Oct 14 15:37:19.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe node latest-control-plane' Oct 14 15:37:20.809: INFO: stderr: "" Oct 14 15:37:20.810: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 10 Oct 2020 08:58:25 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 14 Oct 2020 15:37:19 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 14 Oct 2020 15:35:19 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 14 Oct 2020 15:35:19 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 14 Oct 2020 15:35:19 +0000 Sat, 10 Oct 2020 08:58:23 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 14 Oct 2020 15:35:19 +0000 Sat, 10 Oct 2020 08:59:37 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 
2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: caa260c6a3b946279ec1bc906e7a2062\n System UUID: e7cbf5f9-e358-4304-a4ab-c83e6879c290\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kindnet-qsltg 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d6h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-proxy-vm99r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d6h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 14 15:37:20.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config describe namespace kubectl-7252' Oct 14 15:37:22.116: INFO: stderr: "" Oct 14 15:37:22.116: INFO: stdout: "Name: kubectl-7252\nLabels: e2e-framework=kubectl\n e2e-run=ecaf12d6-4eab-40fd-a91d-dd048d482243\nAnnotations: \nStatus: 
Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:37:22.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7252" for this suite. • [SLOW TEST:13.251 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":297,"skipped":4850,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:37:22.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:37:40.773: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:37:42.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286660, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286660, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286660, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286660, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:37:45.844: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 15:37:45.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for 
custom resource e2e-test-webhook-5755-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:37:47.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4711" for this suite. STEP: Destroying namespace "webhook-4711-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:25.122 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":298,"skipped":4870,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Oct 14 15:37:47.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 14 15:37:47.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2351' Oct 14 15:37:48.659: INFO: stderr: "" Oct 14 15:37:48.660: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 14 15:37:53.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2351 -o json' Oct 14 15:37:55.031: INFO: stderr: "" Oct 14 15:37:55.031: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-14T15:37:48Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n 
\"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T15:37:48Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.224\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T15:37:52Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2351\",\n \"resourceVersion\": \"1161976\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2351/pods/e2e-test-httpd-pod\",\n \"uid\": \"5b7ac334-987e-4a1a-a99b-3a9c244f43d5\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-q9fh6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-q9fh6\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-q9fh6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T15:37:48Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T15:37:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T15:37:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T15:37:48Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://5d72ff5f9b868115b664426df3084e9c6902f1bc2d1099de38b29872ce499af6\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": 
true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-10-14T15:37:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.224\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.224\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-14T15:37:48Z\"\n }\n}\n" STEP: replace the image in the pod Oct 14 15:37:55.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2351' Oct 14 15:37:57.680: INFO: stderr: "" Oct 14 15:37:57.680: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Oct 14 15:37:57.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34323 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2351' Oct 14 15:38:05.672: INFO: stderr: "" Oct 14 15:38:05.672: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:38:05.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2351" for this suite. 
• [SLOW TEST:18.434 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":299,"skipped":4876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:38:05.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 15:38:11.761: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 15:38:13.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 15:38:15.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738286691, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 15:38:19.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:38:19.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6445" for this suite. STEP: Destroying namespace "webhook-6445-markers" for this suite. 
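The "Patching a mutating webhook configuration's rules to include the create operation" step implies a JSON patch against `webhooks[].rules[].operations` on the MutatingWebhookConfiguration. A sketch of what such a patch body could look like (the index `0` and the single-rule shape are assumptions; the real test builds the object through the Go client, not a hand-written patch):

```python
import json

# Hypothetical JSON patch that re-adds CREATE to the first rule of the
# first webhook, mirroring the step described in the transcript above.
patch = [
    {"op": "replace",
     "path": "/webhooks/0/rules/0/operations",
     "value": ["CREATE"]},
]
body = json.dumps(patch)
print(body)
```

A patch of this shape could be applied with `kubectl patch mutatingwebhookconfiguration <name> --type=json -p "$body"` (configuration name hypothetical here).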
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.586 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":300,"skipped":4900,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:38:19.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-qh6q STEP: Creating a pod to test atomic-volume-subpath Oct 14 15:38:19.528: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qh6q" in namespace "subpath-2200" to be "Succeeded or Failed" Oct 14 15:38:19.561: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Pending", Reason="", readiness=false. Elapsed: 32.56987ms Oct 14 15:38:21.618: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089454663s Oct 14 15:38:23.625: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 4.096087487s Oct 14 15:38:25.633: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 6.104118663s Oct 14 15:38:27.640: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 8.11136337s Oct 14 15:38:29.688: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 10.159609972s Oct 14 15:38:31.698: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 12.169660443s Oct 14 15:38:33.711: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 14.18187981s Oct 14 15:38:35.718: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 16.189010474s Oct 14 15:38:37.725: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 18.195948962s Oct 14 15:38:39.744: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. Elapsed: 20.215104571s Oct 14 15:38:41.753: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.224139861s Oct 14 15:38:43.769: INFO: Pod "pod-subpath-test-projected-qh6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.240161851s STEP: Saw pod success Oct 14 15:38:43.769: INFO: Pod "pod-subpath-test-projected-qh6q" satisfied condition "Succeeded or Failed" Oct 14 15:38:43.775: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-qh6q container test-container-subpath-projected-qh6q: STEP: delete the pod Oct 14 15:38:43.964: INFO: Waiting for pod pod-subpath-test-projected-qh6q to disappear Oct 14 15:38:44.009: INFO: Pod pod-subpath-test-projected-qh6q no longer exists STEP: Deleting pod pod-subpath-test-projected-qh6q Oct 14 15:38:44.009: INFO: Deleting pod "pod-subpath-test-projected-qh6q" in namespace "subpath-2200" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:38:44.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2200" for this suite. 
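The repeated `Phase="…", Elapsed: …` lines above come from the e2e framework's standard poll-until-terminal loop ("Waiting up to 5m0s for pod … to be 'Succeeded or Failed'"). A minimal Python sketch of that pattern, with a stubbed phase source standing in for a real API call (all names here are illustrative, not the framework's):

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until it reports a terminal pod phase, printing
    Phase/Elapsed lines in the style of the transcript above."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(interval_s)

# Stubbed phase source that walks Pending -> Running -> Succeeded,
# like the pod-subpath-test-projected-qh6q transcript.
phases = iter(["Pending", "Running", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), interval_s=0.01)
print(result)
```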
• [SLOW TEST:24.785 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":301,"skipped":4903,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:38:44.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:38:57.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3103" for this suite. • [SLOW TEST:13.246 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":302,"skipped":4907,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 15:38:57.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-aac1eceb-8241-45e5-b685-1dadf6fce5d4 STEP: Creating a pod to test consume configMaps Oct 14 15:38:57.439: INFO: Waiting up to 5m0s for pod "pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9" in namespace "configmap-2034" to be "Succeeded or Failed" Oct 14 15:38:57.461: INFO: Pod "pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9": Phase="Pending", Reason="", readiness=false. Elapsed: 22.00874ms Oct 14 15:38:59.595: INFO: Pod "pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15623059s Oct 14 15:39:01.603: INFO: Pod "pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.163630432s STEP: Saw pod success Oct 14 15:39:01.603: INFO: Pod "pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9" satisfied condition "Succeeded or Failed" Oct 14 15:39:01.607: INFO: Trying to get logs from node latest-worker pod pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9 container configmap-volume-test: STEP: delete the pod Oct 14 15:39:01.809: INFO: Waiting for pod pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9 to disappear Oct 14 15:39:01.855: INFO: Pod pod-configmaps-654fffd6-25d5-4927-8d6a-608172cfbac9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 15:39:01.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2034" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":303,"skipped":4910,"failed":0} SSSSSSSSSSSSSSSSSSSOct 14 15:39:01.938: INFO: Running AfterSuite actions on all nodes Oct 14 15:39:01.939: INFO: Running AfterSuite actions on node 1 Oct 14 15:39:01.939: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 7445.855 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS