I0929 10:31:22.394575 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0929 10:31:22.394751 7 e2e.go:129] Starting e2e run "4c389a24-f053-434f-9b2e-b565abdb321c" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1601375481 - Will randomize all specs
Will run 303 of 5232 specs

Sep 29 10:31:22.452: INFO: >>> kubeConfig: /root/.kube/config
Sep 29 10:31:22.456: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 29 10:31:22.478: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 29 10:31:22.517: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 29 10:31:22.517: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep 29 10:31:22.517: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Sep 29 10:31:22.528: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Sep 29 10:31:22.528: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Sep 29 10:31:22.528: INFO: e2e test version: v1.19.2
Sep 29 10:31:22.529: INFO: kube-apiserver version: v1.19.0
Sep 29 10:31:22.529: INFO: >>> kubeConfig: /root/.kube/config
Sep 29 10:31:22.533: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API should delete a collection of events [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:31:22.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
Sep 29 10:31:22.644: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Sep 29 10:31:22.794: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:31:22.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6326" for this suite.
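The test above creates a labeled set of Events, lists them with a label selector, and then issues a single DeleteCollection request scoped to that selector. The equality-based selector matching that drives both the list and the delete can be sketched as a standalone illustration (this is not the client-go implementation, which lives in `k8s.io/apimachinery`'s `labels` package):

```python
# Minimal sketch of equality-based label selection, as used when the test
# lists and then DeleteCollection-deletes events by label. Illustrative only.

def matches(selector: dict, labels: dict) -> bool:
    """True if every key=value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Hypothetical objects standing in for the Events the test creates.
events = [
    {"name": "event-1", "labels": {"testevent-set": "true"}},
    {"name": "event-2", "labels": {"testevent-set": "true"}},
    {"name": "unrelated", "labels": {}},
]

# DeleteCollection removes every object matching the selector, so only
# non-matching objects remain afterwards.
remaining = [e for e in events if not matches({"testevent-set": "true"}, e["labels"])]
print([e["name"] for e in remaining])  # → ['unrelated']
```

The final STEP ("check that the list of events matches the requested quantity") is exactly this post-condition: a second selector-scoped list must come back empty.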
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":1,"skipped":56,"failed":0}
SSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:31:22.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-9089/configmap-test-1cc2b42a-692b-4aa2-93cd-b8e591e6a58a
STEP: Creating a pod to test consume configMaps
Sep 29 10:31:23.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5" in namespace "configmap-9089" to be "Succeeded or Failed"
Sep 29 10:31:23.170: INFO: Pod "pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.17411ms
Sep 29 10:31:25.175: INFO: Pod "pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016996405s
Sep 29 10:31:27.180: INFO: Pod "pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021657422s
STEP: Saw pod success
Sep 29 10:31:27.180: INFO: Pod "pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5" satisfied condition "Succeeded or Failed"
Sep 29 10:31:27.183: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5 container env-test:
STEP: delete the pod
Sep 29 10:31:27.266: INFO: Waiting for pod pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5 to disappear
Sep 29 10:31:27.282: INFO: Pod pod-configmaps-5db8d477-3f9c-4fd4-837e-726f7800bfa5 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:31:27.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9089" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":62,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:31:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:31:31.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4891" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":3,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:31:31.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:31:38.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3197" for this suite.
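The ResourceQuota test creates a quota object and waits for the quota controller to populate `status.used`. A minimal manifest of the kind being created looks like this (the quota name and the specific `hard` limits are not shown in the log, so they are illustrative):

```yaml
# Illustrative ResourceQuota; the controller copies spec.hard into
# status.hard and fills status.used once it has counted the namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota            # illustrative name
  namespace: resourcequota-3197
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 500Mi
```

"Ensuring resource quota status is calculated" then amounts to polling the object until `status.hard` and `status.used` are both set.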
• [SLOW TEST:7.147 seconds]
[sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":4,"skipped":85,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:31:38.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 10:33:39.006: INFO: Deleting pod "var-expansion-63252f6d-f060-4e78-adbb-bf82e7018be8" in namespace "var-expansion-2957"
Sep 29 10:33:39.012: INFO: Wait up to 5m0s for pod "var-expansion-63252f6d-f060-4e78-adbb-bf82e7018be8" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:33:43.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2957" for this suite.
• [SLOW TEST:124.362 seconds]
[k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":5,"skipped":88,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:33:43.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0929 10:34:23.422061 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 29 10:35:25.442: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Sep 29 10:35:25.442: INFO: Deleting pod "simpletest.rc-2vl2d" in namespace "gc-7298"
Sep 29 10:35:25.451: INFO: Deleting pod "simpletest.rc-4gbdg" in namespace "gc-7298"
Sep 29 10:35:25.538: INFO: Deleting pod "simpletest.rc-98t4l" in namespace "gc-7298"
Sep 29 10:35:25.613: INFO: Deleting pod "simpletest.rc-9twp6" in namespace "gc-7298"
Sep 29 10:35:26.043: INFO: Deleting pod "simpletest.rc-c8ptb" in namespace "gc-7298"
Sep 29 10:35:26.229: INFO: Deleting pod "simpletest.rc-fjdh2" in namespace "gc-7298"
Sep 29 10:35:26.469: INFO: Deleting pod "simpletest.rc-fpcdc" in namespace "gc-7298"
Sep 29 10:35:26.708: INFO: Deleting pod "simpletest.rc-gwrd4" in namespace "gc-7298"
Sep 29 10:35:27.221: INFO: Deleting pod "simpletest.rc-ls7jm" in namespace "gc-7298"
Sep 29 10:35:27.949: INFO: Deleting pod "simpletest.rc-rdjp8" in namespace "gc-7298"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:35:28.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7298" for this suite.
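The "delete options" that make the garbage collector orphan the RC's pods instead of cascading the delete are carried in the DELETE request body. A sketch of that body (this is the standard `meta/v1` DeleteOptions shape; the test's exact wire request is not shown in the log):

```yaml
# DeleteOptions sent with the ReplicationController DELETE request.
# propagationPolicy: Orphan tells the garbage collector to strip the
# ownerReferences from the RC's pods and leave them running, which is
# why the test then waits 30 seconds and deletes the pods itself.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With recent kubectl versions the equivalent is `kubectl delete rc <name> --cascade=orphan` (older releases spelled it `--cascade=false`).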
• [SLOW TEST:105.541 seconds]
[sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":6,"skipped":90,"failed":0}
SS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:35:28.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-54e14573-2210-4e18-9f09-11de6f8e380c
STEP: Creating a pod to test consume secrets
Sep 29 10:35:28.902: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48" in namespace "projected-6709" to be "Succeeded or Failed"
Sep 29 10:35:28.942: INFO: Pod "pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48": Phase="Pending", Reason="", readiness=false. Elapsed: 40.39047ms
Sep 29 10:35:31.156: INFO: Pod "pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254297432s
Sep 29 10:35:33.160: INFO: Pod "pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.258343796s
STEP: Saw pod success
Sep 29 10:35:33.160: INFO: Pod "pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48" satisfied condition "Succeeded or Failed"
Sep 29 10:35:33.163: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48 container secret-volume-test:
STEP: delete the pod
Sep 29 10:35:33.351: INFO: Waiting for pod pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48 to disappear
Sep 29 10:35:33.361: INFO: Pod pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:35:33.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6709" for this suite.
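The "multiple volumes" being exercised are projected volumes that each pull from the secret created above. A minimal sketch of the pod spec (the secret name comes from the log; the image, mount paths, and volume names are illustrative, and the real test image is not shown here):

```yaml
# Illustrative pod consuming the same secret through two projected volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-372cee11-ed1b-4643-a298-b817d8a7bf48
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox               # placeholder; the suite uses its own test image
    command: ["cat", "/etc/projected-secret-volume-1/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-54e14573-2210-4e18-9f09-11de6f8e380c
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-54e14573-2210-4e18-9f09-11de6f8e380c
```

The pod runs to completion, and the test then reads the container log to verify the mounted secret content, which is the "Trying to get logs" step above.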
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":92,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:35:33.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8533.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8533.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8533.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8533.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8533.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 29 10:35:39.528: INFO: DNS probes using dns-8533/dns-test-1ce82cad-5dca-4a01-becd-b65283ed6d65 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:35:39.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8533" for this suite.
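The awk one-liner in both probe scripts derives the pod's DNS A-record name from its IP (`hostname -i`): dots become dashes and the namespace-scoped `pod.cluster.local` suffix is appended. The same transformation as a runnable sketch (the sample IP is illustrative):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the probe script's awk pipeline: an IPv4 address like
    10.244.1.5 in namespace dns-8533 maps to the pod A record
    10-244-1-5.dns-8533.pod.cluster.local."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-8533"))
# → 10-244-1-5.dns-8533.pod.cluster.local
```

The probes then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file per successful lookup, which the test collects from `/results`.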
• [SLOW TEST:6.250 seconds]
[sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":8,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:35:39.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating replication controller my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287
Sep 29 10:35:39.951: INFO: Pod name my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287: Found 0 pods out of 1
Sep 29 10:35:44.954: INFO: Pod name my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287: Found 1 pods out of 1
Sep 29 10:35:44.954: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287" are running
Sep 29 10:35:44.957: INFO: Pod "my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287-lbhqj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 10:35:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 10:35:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 10:35:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 10:35:39 +0000 UTC Reason: Message:}])
Sep 29 10:35:44.958: INFO: Trying to dial the pod
Sep 29 10:35:50.012: INFO: Controller my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287: Got expected result from replica 1 [my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287-lbhqj]: "my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287-lbhqj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:35:50.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-469" for this suite.
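The RC creates one replica whose container serves its own hostname, and the test dials the pod expecting the pod name back (which is the "Got expected result from replica 1" line). A sketch of the manifest shape (the RC name comes from the log; the label key, image, and port are illustrative, since the suite's actual image is not shown):

```yaml
# Illustrative ReplicationController; each replica serves its own
# hostname over HTTP, so dialing a pod returns that pod's name.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287
  template:
    metadata:
      labels:
        name: my-hostname-basic-20276d67-ed71-41e4-8879-fb17cccb8287
    spec:
      containers:
      - name: serve-hostname          # illustrative container name
        image: example/serve-hostname  # placeholder for the suite's public test image
        ports:
        - containerPort: 9376
```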
• [SLOW TEST:10.406 seconds]
[sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":9,"skipped":173,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:35:50.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 29 10:35:50.685: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 29 10:35:52.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 29 10:35:54.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972550, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 29 10:35:57.752: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:35:57.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2849" for this suite.
STEP: Destroying namespace "webhook-2849-markers" for this suite.
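What the test updates and patches is the `rules` list of a MutatingWebhookConfiguration: with `CREATE` removed a new ConfigMap bypasses the webhook, and with it restored the ConfigMap is mutated. A sketch of the object (the service name and namespace appear in the log; the configuration name, webhook name, and path are illustrative):

```yaml
# Illustrative MutatingWebhookConfiguration; the test toggles the
# "CREATE" entry in rules[].operations to turn mutation off and on.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook     # illustrative name
webhooks:
- name: adding-configmap-data.example.com   # illustrative name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      namespace: webhook-2849
      name: e2e-test-webhook
      path: /mutating-configmaps       # illustrative path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]             # removed, then patched back in
    resources: ["configmaps"]
```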
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.210 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":10,"skipped":188,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:35:58.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-003d3989-be20-42b4-8336-df6bf88243a4
STEP: Creating a pod to test consume configMaps
Sep 29 10:35:58.383: INFO: Waiting up to 5m0s for pod "pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3" in namespace "configmap-9600" to be "Succeeded or Failed"
Sep 29 10:35:58.527: INFO: Pod "pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3": Phase="Pending", Reason="", readiness=false. Elapsed: 144.602064ms
Sep 29 10:36:00.532: INFO: Pod "pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149061879s
Sep 29 10:36:02.535: INFO: Pod "pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152863508s
STEP: Saw pod success
Sep 29 10:36:02.536: INFO: Pod "pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3" satisfied condition "Succeeded or Failed"
Sep 29 10:36:02.538: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3 container configmap-volume-test:
STEP: delete the pod
Sep 29 10:36:02.691: INFO: Waiting for pod pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3 to disappear
Sep 29 10:36:02.696: INFO: Pod pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:36:02.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9600" for this suite.
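"Multiple volumes in the same pod" here means the one ConfigMap above is mounted through two separate volumes. A sketch of the pod spec (the ConfigMap name comes from the log; volume names, mount paths, and the image are illustrative):

```yaml
# Illustrative pod mounting the same ConfigMap via two volumes.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-d499c0bd-c0e6-479e-aa5b-c76e835e9fc3
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox               # placeholder for the suite's test image
    command: ["cat", "/etc/configmap-volume-1/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-003d3989-be20-42b4-8336-df6bf88243a4
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-003d3989-be20-42b4-8336-df6bf88243a4
```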
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":192,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:36:02.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-7d8121ea-fc66-409b-8588-24fa4d201185 STEP: Creating a pod to test consume configMaps Sep 29 10:36:03.051: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df" in namespace "projected-4785" to be "Succeeded or Failed" Sep 29 10:36:03.114: INFO: Pod "pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df": Phase="Pending", Reason="", readiness=false. Elapsed: 62.380081ms Sep 29 10:36:05.117: INFO: Pod "pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065548234s Sep 29 10:36:07.122: INFO: Pod "pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071065576s STEP: Saw pod success Sep 29 10:36:07.122: INFO: Pod "pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df" satisfied condition "Succeeded or Failed" Sep 29 10:36:07.125: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df container projected-configmap-volume-test: STEP: delete the pod Sep 29 10:36:07.152: INFO: Waiting for pod pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df to disappear Sep 29 10:36:07.164: INFO: Pod pod-projected-configmaps-6bd95dd9-ca76-42eb-b65e-3bce2e9076df no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:36:07.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4785" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:36:07.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2348 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-2348 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2348 Sep 29 10:36:07.523: INFO: Found 0 stateful pods, waiting for 1 Sep 29 10:36:17.527: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Sep 29 10:36:17.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:36:20.258: INFO: stderr: "I0929 10:36:20.151010 28 log.go:181] (0xc00003a420) (0xc000ca6000) Create stream\nI0929 10:36:20.151072 28 log.go:181] (0xc00003a420) (0xc000ca6000) Stream added, broadcasting: 1\nI0929 10:36:20.153437 28 log.go:181] (0xc00003a420) Reply frame received for 1\nI0929 10:36:20.153499 28 log.go:181] (0xc00003a420) (0xc000ca60a0) Create stream\nI0929 10:36:20.153520 28 log.go:181] (0xc00003a420) (0xc000ca60a0) Stream added, broadcasting: 3\nI0929 10:36:20.154514 28 log.go:181] (0xc00003a420) Reply frame received for 3\nI0929 10:36:20.154565 28 log.go:181] (0xc00003a420) (0xc000d16000) Create stream\nI0929 10:36:20.154571 28 log.go:181] 
(0xc00003a420) (0xc000d16000) Stream added, broadcasting: 5\nI0929 10:36:20.155472 28 log.go:181] (0xc00003a420) Reply frame received for 5\nI0929 10:36:20.217592 28 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 10:36:20.217612 28 log.go:181] (0xc000d16000) (5) Data frame handling\nI0929 10:36:20.217624 28 log.go:181] (0xc000d16000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:36:20.250863 28 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 10:36:20.250894 28 log.go:181] (0xc000ca60a0) (3) Data frame handling\nI0929 10:36:20.250922 28 log.go:181] (0xc000ca60a0) (3) Data frame sent\nI0929 10:36:20.251199 28 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 10:36:20.251225 28 log.go:181] (0xc000d16000) (5) Data frame handling\nI0929 10:36:20.251254 28 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 10:36:20.251271 28 log.go:181] (0xc000ca60a0) (3) Data frame handling\nI0929 10:36:20.253343 28 log.go:181] (0xc00003a420) Data frame received for 1\nI0929 10:36:20.253367 28 log.go:181] (0xc000ca6000) (1) Data frame handling\nI0929 10:36:20.253386 28 log.go:181] (0xc000ca6000) (1) Data frame sent\nI0929 10:36:20.253398 28 log.go:181] (0xc00003a420) (0xc000ca6000) Stream removed, broadcasting: 1\nI0929 10:36:20.253452 28 log.go:181] (0xc00003a420) Go away received\nI0929 10:36:20.253752 28 log.go:181] (0xc00003a420) (0xc000ca6000) Stream removed, broadcasting: 1\nI0929 10:36:20.253770 28 log.go:181] (0xc00003a420) (0xc000ca60a0) Stream removed, broadcasting: 3\nI0929 10:36:20.253780 28 log.go:181] (0xc00003a420) (0xc000d16000) Stream removed, broadcasting: 5\n" Sep 29 10:36:20.259: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:36:20.259: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:36:20.262: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=true Sep 29 10:36:30.266: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:36:30.266: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:36:30.278: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:36:30.278: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:36:30.278: INFO: Sep 29 10:36:30.278: INFO: StatefulSet ss has not reached scale 3, at 1 Sep 29 10:36:31.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995789343s Sep 29 10:36:32.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990885387s Sep 29 10:36:33.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985061827s Sep 29 10:36:34.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.760439472s Sep 29 10:36:35.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.756598397s Sep 29 10:36:36.527: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.751936095s Sep 29 10:36:37.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.746598076s Sep 29 10:36:38.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.710595549s Sep 29 10:36:39.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 704.976583ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2348 Sep 29 10:36:40.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:36:40.809: INFO: stderr: "I0929 10:36:40.717831 47 log.go:181] (0xc0009c0fd0) (0xc0003bd900) Create stream\nI0929 10:36:40.717882 47 log.go:181] (0xc0009c0fd0) (0xc0003bd900) Stream added, broadcasting: 1\nI0929 10:36:40.723432 47 log.go:181] (0xc0009c0fd0) Reply frame received for 1\nI0929 10:36:40.723477 47 log.go:181] (0xc0009c0fd0) (0xc000a1e500) Create stream\nI0929 10:36:40.723490 47 log.go:181] (0xc0009c0fd0) (0xc000a1e500) Stream added, broadcasting: 3\nI0929 10:36:40.724422 47 log.go:181] (0xc0009c0fd0) Reply frame received for 3\nI0929 10:36:40.724461 47 log.go:181] (0xc0009c0fd0) (0xc000d1c000) Create stream\nI0929 10:36:40.724473 47 log.go:181] (0xc0009c0fd0) (0xc000d1c000) Stream added, broadcasting: 5\nI0929 10:36:40.725473 47 log.go:181] (0xc0009c0fd0) Reply frame received for 5\nI0929 10:36:40.802622 47 log.go:181] (0xc0009c0fd0) Data frame received for 5\nI0929 10:36:40.802670 47 log.go:181] (0xc0009c0fd0) Data frame received for 3\nI0929 10:36:40.802714 47 log.go:181] (0xc000a1e500) (3) Data frame handling\nI0929 10:36:40.802729 47 log.go:181] (0xc000a1e500) (3) Data frame sent\nI0929 10:36:40.802744 47 log.go:181] (0xc0009c0fd0) Data frame received for 3\nI0929 10:36:40.802753 47 log.go:181] (0xc000a1e500) (3) Data frame handling\nI0929 10:36:40.802773 47 log.go:181] (0xc000d1c000) (5) Data frame handling\nI0929 10:36:40.802798 47 log.go:181] (0xc000d1c000) (5) Data frame sent\nI0929 10:36:40.802810 47 log.go:181] (0xc0009c0fd0) Data frame received for 5\nI0929 10:36:40.802820 47 log.go:181] (0xc000d1c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0929 10:36:40.804172 47 log.go:181] (0xc0009c0fd0) Data frame received for 1\nI0929 10:36:40.804187 47 log.go:181] (0xc0003bd900) (1) Data frame handling\nI0929 10:36:40.804194 47 log.go:181] (0xc0003bd900) (1) 
Data frame sent\nI0929 10:36:40.804212 47 log.go:181] (0xc0009c0fd0) (0xc0003bd900) Stream removed, broadcasting: 1\nI0929 10:36:40.804229 47 log.go:181] (0xc0009c0fd0) Go away received\nI0929 10:36:40.804655 47 log.go:181] (0xc0009c0fd0) (0xc0003bd900) Stream removed, broadcasting: 1\nI0929 10:36:40.804680 47 log.go:181] (0xc0009c0fd0) (0xc000a1e500) Stream removed, broadcasting: 3\nI0929 10:36:40.804697 47 log.go:181] (0xc0009c0fd0) (0xc000d1c000) Stream removed, broadcasting: 5\n" Sep 29 10:36:40.809: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:36:40.810: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:36:40.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:36:41.029: INFO: stderr: "I0929 10:36:40.933590 65 log.go:181] (0xc00003a420) (0xc0009ce000) Create stream\nI0929 10:36:40.933657 65 log.go:181] (0xc00003a420) (0xc0009ce000) Stream added, broadcasting: 1\nI0929 10:36:40.935510 65 log.go:181] (0xc00003a420) Reply frame received for 1\nI0929 10:36:40.935565 65 log.go:181] (0xc00003a420) (0xc0009ce0a0) Create stream\nI0929 10:36:40.935594 65 log.go:181] (0xc00003a420) (0xc0009ce0a0) Stream added, broadcasting: 3\nI0929 10:36:40.936572 65 log.go:181] (0xc00003a420) Reply frame received for 3\nI0929 10:36:40.936614 65 log.go:181] (0xc00003a420) (0xc0009ce140) Create stream\nI0929 10:36:40.936625 65 log.go:181] (0xc00003a420) (0xc0009ce140) Stream added, broadcasting: 5\nI0929 10:36:40.937557 65 log.go:181] (0xc00003a420) Reply frame received for 5\nI0929 10:36:41.021809 65 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 10:36:41.021852 65 log.go:181] (0xc0009ce0a0) (3) Data frame handling\nI0929 10:36:41.021868 65 
log.go:181] (0xc0009ce0a0) (3) Data frame sent\nI0929 10:36:41.021879 65 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 10:36:41.021891 65 log.go:181] (0xc0009ce0a0) (3) Data frame handling\nI0929 10:36:41.021941 65 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 10:36:41.021975 65 log.go:181] (0xc0009ce140) (5) Data frame handling\nI0929 10:36:41.021995 65 log.go:181] (0xc0009ce140) (5) Data frame sent\nI0929 10:36:41.022015 65 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 10:36:41.022031 65 log.go:181] (0xc0009ce140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0929 10:36:41.023853 65 log.go:181] (0xc00003a420) Data frame received for 1\nI0929 10:36:41.023881 65 log.go:181] (0xc0009ce000) (1) Data frame handling\nI0929 10:36:41.023904 65 log.go:181] (0xc0009ce000) (1) Data frame sent\nI0929 10:36:41.023929 65 log.go:181] (0xc00003a420) (0xc0009ce000) Stream removed, broadcasting: 1\nI0929 10:36:41.024468 65 log.go:181] (0xc00003a420) (0xc0009ce000) Stream removed, broadcasting: 1\nI0929 10:36:41.024492 65 log.go:181] (0xc00003a420) (0xc0009ce0a0) Stream removed, broadcasting: 3\nI0929 10:36:41.024665 65 log.go:181] (0xc00003a420) (0xc0009ce140) Stream removed, broadcasting: 5\n" Sep 29 10:36:41.029: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:36:41.029: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:36:41.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:36:41.261: INFO: stderr: "I0929 10:36:41.176202 83 log.go:181] (0xc000da6000) (0xc0008b4000) Create stream\nI0929 10:36:41.176279 83 
log.go:181] (0xc000da6000) (0xc0008b4000) Stream added, broadcasting: 1\nI0929 10:36:41.181018 83 log.go:181] (0xc000da6000) Reply frame received for 1\nI0929 10:36:41.181080 83 log.go:181] (0xc000da6000) (0xc0008b40a0) Create stream\nI0929 10:36:41.181098 83 log.go:181] (0xc000da6000) (0xc0008b40a0) Stream added, broadcasting: 3\nI0929 10:36:41.182118 83 log.go:181] (0xc000da6000) Reply frame received for 3\nI0929 10:36:41.182156 83 log.go:181] (0xc000da6000) (0xc0000cca00) Create stream\nI0929 10:36:41.182166 83 log.go:181] (0xc000da6000) (0xc0000cca00) Stream added, broadcasting: 5\nI0929 10:36:41.182953 83 log.go:181] (0xc000da6000) Reply frame received for 5\nI0929 10:36:41.253806 83 log.go:181] (0xc000da6000) Data frame received for 5\nI0929 10:36:41.253852 83 log.go:181] (0xc0000cca00) (5) Data frame handling\nI0929 10:36:41.253868 83 log.go:181] (0xc0000cca00) (5) Data frame sent\nI0929 10:36:41.253878 83 log.go:181] (0xc000da6000) Data frame received for 5\nI0929 10:36:41.253885 83 log.go:181] (0xc0000cca00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0929 10:36:41.253932 83 log.go:181] (0xc000da6000) Data frame received for 3\nI0929 10:36:41.253974 83 log.go:181] (0xc0008b40a0) (3) Data frame handling\nI0929 10:36:41.254010 83 log.go:181] (0xc0008b40a0) (3) Data frame sent\nI0929 10:36:41.254032 83 log.go:181] (0xc000da6000) Data frame received for 3\nI0929 10:36:41.254053 83 log.go:181] (0xc0008b40a0) (3) Data frame handling\nI0929 10:36:41.255455 83 log.go:181] (0xc000da6000) Data frame received for 1\nI0929 10:36:41.255476 83 log.go:181] (0xc0008b4000) (1) Data frame handling\nI0929 10:36:41.255488 83 log.go:181] (0xc0008b4000) (1) Data frame sent\nI0929 10:36:41.255565 83 log.go:181] (0xc000da6000) (0xc0008b4000) Stream removed, broadcasting: 1\nI0929 10:36:41.255630 83 log.go:181] (0xc000da6000) Go away received\nI0929 10:36:41.255905 83 
log.go:181] (0xc000da6000) (0xc0008b4000) Stream removed, broadcasting: 1\nI0929 10:36:41.255924 83 log.go:181] (0xc000da6000) (0xc0008b40a0) Stream removed, broadcasting: 3\nI0929 10:36:41.255935 83 log.go:181] (0xc000da6000) (0xc0000cca00) Stream removed, broadcasting: 5\n" Sep 29 10:36:41.261: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:36:41.261: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:36:41.265: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Sep 29 10:36:51.271: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:36:51.271: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:36:51.271: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Sep 29 10:36:51.275: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:36:51.520: INFO: stderr: "I0929 10:36:51.417388 101 log.go:181] (0xc000e411e0) (0xc000930960) Create stream\nI0929 10:36:51.417445 101 log.go:181] (0xc000e411e0) (0xc000930960) Stream added, broadcasting: 1\nI0929 10:36:51.422259 101 log.go:181] (0xc000e411e0) Reply frame received for 1\nI0929 10:36:51.422300 101 log.go:181] (0xc000e411e0) (0xc000911ea0) Create stream\nI0929 10:36:51.422309 101 log.go:181] (0xc000e411e0) (0xc000911ea0) Stream added, broadcasting: 3\nI0929 10:36:51.423244 101 log.go:181] (0xc000e411e0) Reply frame received for 3\nI0929 10:36:51.423288 101 log.go:181] (0xc000e411e0) (0xc00079e140) Create stream\nI0929 10:36:51.423305 101 log.go:181] (0xc000e411e0) (0xc00079e140) Stream 
added, broadcasting: 5\nI0929 10:36:51.424307 101 log.go:181] (0xc000e411e0) Reply frame received for 5\nI0929 10:36:51.512575 101 log.go:181] (0xc000e411e0) Data frame received for 5\nI0929 10:36:51.512611 101 log.go:181] (0xc00079e140) (5) Data frame handling\nI0929 10:36:51.512628 101 log.go:181] (0xc00079e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:36:51.512645 101 log.go:181] (0xc000e411e0) Data frame received for 3\nI0929 10:36:51.512653 101 log.go:181] (0xc000911ea0) (3) Data frame handling\nI0929 10:36:51.512659 101 log.go:181] (0xc000911ea0) (3) Data frame sent\nI0929 10:36:51.512738 101 log.go:181] (0xc000e411e0) Data frame received for 3\nI0929 10:36:51.512754 101 log.go:181] (0xc000911ea0) (3) Data frame handling\nI0929 10:36:51.512779 101 log.go:181] (0xc000e411e0) Data frame received for 5\nI0929 10:36:51.512793 101 log.go:181] (0xc00079e140) (5) Data frame handling\nI0929 10:36:51.515144 101 log.go:181] (0xc000e411e0) Data frame received for 1\nI0929 10:36:51.515162 101 log.go:181] (0xc000930960) (1) Data frame handling\nI0929 10:36:51.515173 101 log.go:181] (0xc000930960) (1) Data frame sent\nI0929 10:36:51.515374 101 log.go:181] (0xc000e411e0) (0xc000930960) Stream removed, broadcasting: 1\nI0929 10:36:51.515579 101 log.go:181] (0xc000e411e0) Go away received\nI0929 10:36:51.515966 101 log.go:181] (0xc000e411e0) (0xc000930960) Stream removed, broadcasting: 1\nI0929 10:36:51.515991 101 log.go:181] (0xc000e411e0) (0xc000911ea0) Stream removed, broadcasting: 3\nI0929 10:36:51.516003 101 log.go:181] (0xc000e411e0) (0xc00079e140) Stream removed, broadcasting: 5\n" Sep 29 10:36:51.520: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:36:51.520: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:36:51.520: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:36:51.768: INFO: stderr: "I0929 10:36:51.650571 119 log.go:181] (0xc00064b4a0) (0xc000642a00) Create stream\nI0929 10:36:51.650630 119 log.go:181] (0xc00064b4a0) (0xc000642a00) Stream added, broadcasting: 1\nI0929 10:36:51.655411 119 log.go:181] (0xc00064b4a0) Reply frame received for 1\nI0929 10:36:51.655451 119 log.go:181] (0xc00064b4a0) (0xc000cbc0a0) Create stream\nI0929 10:36:51.655463 119 log.go:181] (0xc00064b4a0) (0xc000cbc0a0) Stream added, broadcasting: 3\nI0929 10:36:51.656453 119 log.go:181] (0xc00064b4a0) Reply frame received for 3\nI0929 10:36:51.656480 119 log.go:181] (0xc00064b4a0) (0xc000642000) Create stream\nI0929 10:36:51.656487 119 log.go:181] (0xc00064b4a0) (0xc000642000) Stream added, broadcasting: 5\nI0929 10:36:51.657476 119 log.go:181] (0xc00064b4a0) Reply frame received for 5\nI0929 10:36:51.725877 119 log.go:181] (0xc00064b4a0) Data frame received for 5\nI0929 10:36:51.725901 119 log.go:181] (0xc000642000) (5) Data frame handling\nI0929 10:36:51.725915 119 log.go:181] (0xc000642000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:36:51.759143 119 log.go:181] (0xc00064b4a0) Data frame received for 3\nI0929 10:36:51.759182 119 log.go:181] (0xc000cbc0a0) (3) Data frame handling\nI0929 10:36:51.759209 119 log.go:181] (0xc000cbc0a0) (3) Data frame sent\nI0929 10:36:51.759224 119 log.go:181] (0xc00064b4a0) Data frame received for 3\nI0929 10:36:51.759236 119 log.go:181] (0xc000cbc0a0) (3) Data frame handling\nI0929 10:36:51.759407 119 log.go:181] (0xc00064b4a0) Data frame received for 5\nI0929 10:36:51.759441 119 log.go:181] (0xc000642000) (5) Data frame handling\nI0929 10:36:51.761085 119 log.go:181] (0xc00064b4a0) Data frame received for 1\nI0929 10:36:51.761115 119 log.go:181] (0xc000642a00) (1) Data frame 
handling\nI0929 10:36:51.761138 119 log.go:181] (0xc000642a00) (1) Data frame sent\nI0929 10:36:51.761162 119 log.go:181] (0xc00064b4a0) (0xc000642a00) Stream removed, broadcasting: 1\nI0929 10:36:51.761187 119 log.go:181] (0xc00064b4a0) Go away received\nI0929 10:36:51.761711 119 log.go:181] (0xc00064b4a0) (0xc000642a00) Stream removed, broadcasting: 1\nI0929 10:36:51.761731 119 log.go:181] (0xc00064b4a0) (0xc000cbc0a0) Stream removed, broadcasting: 3\nI0929 10:36:51.761744 119 log.go:181] (0xc00064b4a0) (0xc000642000) Stream removed, broadcasting: 5\n" Sep 29 10:36:51.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:36:51.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:36:51.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2348 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:36:52.053: INFO: stderr: "I0929 10:36:51.940658 137 log.go:181] (0xc00018d8c0) (0xc00062c960) Create stream\nI0929 10:36:51.940727 137 log.go:181] (0xc00018d8c0) (0xc00062c960) Stream added, broadcasting: 1\nI0929 10:36:51.946329 137 log.go:181] (0xc00018d8c0) Reply frame received for 1\nI0929 10:36:51.946374 137 log.go:181] (0xc00018d8c0) (0xc000cc40a0) Create stream\nI0929 10:36:51.946389 137 log.go:181] (0xc00018d8c0) (0xc000cc40a0) Stream added, broadcasting: 3\nI0929 10:36:51.947338 137 log.go:181] (0xc00018d8c0) Reply frame received for 3\nI0929 10:36:51.947393 137 log.go:181] (0xc00018d8c0) (0xc000cc4140) Create stream\nI0929 10:36:51.947420 137 log.go:181] (0xc00018d8c0) (0xc000cc4140) Stream added, broadcasting: 5\nI0929 10:36:51.948260 137 log.go:181] (0xc00018d8c0) Reply frame received for 5\nI0929 10:36:52.005842 137 log.go:181] (0xc00018d8c0) Data frame received for 5\nI0929 10:36:52.005887 137 
log.go:181] (0xc000cc4140) (5) Data frame handling\nI0929 10:36:52.005901 137 log.go:181] (0xc000cc4140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:36:52.047834 137 log.go:181] (0xc00018d8c0) Data frame received for 3\nI0929 10:36:52.047886 137 log.go:181] (0xc000cc40a0) (3) Data frame handling\nI0929 10:36:52.047928 137 log.go:181] (0xc000cc40a0) (3) Data frame sent\nI0929 10:36:52.047950 137 log.go:181] (0xc00018d8c0) Data frame received for 3\nI0929 10:36:52.047967 137 log.go:181] (0xc000cc40a0) (3) Data frame handling\nI0929 10:36:52.048131 137 log.go:181] (0xc00018d8c0) Data frame received for 5\nI0929 10:36:52.048164 137 log.go:181] (0xc000cc4140) (5) Data frame handling\nI0929 10:36:52.049554 137 log.go:181] (0xc00018d8c0) Data frame received for 1\nI0929 10:36:52.049575 137 log.go:181] (0xc00062c960) (1) Data frame handling\nI0929 10:36:52.049595 137 log.go:181] (0xc00062c960) (1) Data frame sent\nI0929 10:36:52.049603 137 log.go:181] (0xc00018d8c0) (0xc00062c960) Stream removed, broadcasting: 1\nI0929 10:36:52.049820 137 log.go:181] (0xc00018d8c0) Go away received\nI0929 10:36:52.049935 137 log.go:181] (0xc00018d8c0) (0xc00062c960) Stream removed, broadcasting: 1\nI0929 10:36:52.050046 137 log.go:181] (0xc00018d8c0) (0xc000cc40a0) Stream removed, broadcasting: 3\nI0929 10:36:52.050060 137 log.go:181] (0xc00018d8c0) (0xc000cc4140) Stream removed, broadcasting: 5\n" Sep 29 10:36:52.053: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:36:52.054: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:36:52.054: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:36:52.057: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Sep 29 10:37:02.084: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 29 
10:37:02.085: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:37:02.085: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:37:02.123: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:02.123: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:02.123: INFO: ss-1 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:02.123: INFO: ss-2 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:02.123: INFO: Sep 29 10:37:02.123: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 29 10:37:03.253: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:03.253: INFO: ss-0 kali-worker2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:03.253: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:03.253: INFO: ss-2 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:03.253: INFO: Sep 29 10:37:03.253: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 29 10:37:04.257: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:04.257: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:04.257: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:04.257: INFO: ss-2 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:30 +0000 UTC }] Sep 29 10:37:04.257: INFO: Sep 29 10:37:04.257: INFO: StatefulSet ss has not reached scale 0, at 3 Sep 29 10:37:05.262: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:05.262: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:05.262: INFO: Sep 29 10:37:05.262: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 29 10:37:06.266: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:06.266: INFO: ss-0 kali-worker2 Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:06.267: INFO: Sep 29 10:37:06.267: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 29 10:37:07.271: INFO: POD NODE PHASE GRACE CONDITIONS Sep 29 10:37:07.271: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-29 10:36:07 +0000 UTC }] Sep 29 10:37:07.271: INFO: Sep 29 10:37:07.271: INFO: StatefulSet ss has not reached scale 0, at 1 Sep 29 10:37:08.274: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.835841246s Sep 29 10:37:09.279: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.83225556s Sep 29 10:37:10.455: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.827228313s Sep 29 10:37:11.460: INFO: Verifying statefulset ss doesn't scale past 0 for another 651.211283ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2348 Sep 29 10:37:12.464: INFO: Scaling statefulset ss to 0 Sep 29 10:37:12.525: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 29 10:37:12.529: INFO: Deleting all statefulset in ns statefulset-2348 Sep 29 10:37:12.531: INFO: Scaling statefulset ss to 0 Sep 29 10:37:12.538: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:37:12.540: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:12.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2348" for this suite. • [SLOW TEST:65.113 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":13,"skipped":255,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:12.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 10:37:13.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 10:37:15.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972633, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972633, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972633, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972633, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the 
endpoint Sep 29 10:37:18.230: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:18.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8225" for this suite. STEP: Destroying namespace "webhook-8225-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.955 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":14,"skipped":257,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:18.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:29.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2526" for this suite. • [SLOW TEST:11.173 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":15,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:29.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 10:37:30.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91" in namespace "downward-api-6842" to be "Succeeded or Failed" Sep 29 10:37:30.193: INFO: Pod "downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91": Phase="Pending", Reason="", readiness=false. 
Elapsed: 138.338419ms Sep 29 10:37:32.197: INFO: Pod "downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142457265s Sep 29 10:37:34.202: INFO: Pod "downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14676291s STEP: Saw pod success Sep 29 10:37:34.202: INFO: Pod "downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91" satisfied condition "Succeeded or Failed" Sep 29 10:37:34.205: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91 container client-container: STEP: delete the pod Sep 29 10:37:34.357: INFO: Waiting for pod downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91 to disappear Sep 29 10:37:34.362: INFO: Pod downwardapi-volume-31d06709-7d94-4399-95c0-fe219e26fa91 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:34.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6842" for this suite. 
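Editor's note: the "downwardapi-volume-…" pod generated by this test reads the container's CPU limit from a file served by a downward API volume. A minimal sketch of an equivalent manifest follows; the pod name, image, and command are illustrative, not the test's actual generated values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative; the e2e test uses its own test image
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                     # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

The test waits for the pod to reach "Succeeded or Failed", then checks the container log for the expected limit value.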
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":301,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:34.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-2ba615c2-ecab-4373-b1a3-240484dd05f4 STEP: Creating configMap with name cm-test-opt-upd-1552c9c6-f61d-4a10-965c-2212978b9aa4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2ba615c2-ecab-4373-b1a3-240484dd05f4 STEP: Updating configmap cm-test-opt-upd-1552c9c6-f61d-4a10-965c-2212978b9aa4 STEP: Creating configMap with name cm-test-opt-create-8f47d176-b97c-43be-8b10-636fb803f7d9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:44.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2060" for this suite. 
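Editor's note: this test mounts two ConfigMaps marked `optional: true`, then deletes one and creates a third, expecting the volume contents to converge. A sketch of the pod's volume configuration, using the ConfigMap names from the log (pod name, image, and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example      # illustrative name
spec:
  containers:
  - name: cm-volume-test
    image: busybox                  # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: delcm-volume
      mountPath: /etc/cm-volume-del
    - name: createcm-volume
      mountPath: /etc/cm-volume-create
  volumes:
  - name: delcm-volume
    configMap:
      name: cm-test-opt-del-2ba615c2-ecab-4373-b1a3-240484dd05f4
      optional: true                # pod keeps running even after this ConfigMap is deleted
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create-8f47d176-b97c-43be-8b10-636fb803f7d9
      optional: true                # volume is populated once this ConfigMap is created
```

Because the volumes are optional, the pod neither fails to start when a referenced ConfigMap is missing nor crashes when one is removed; the kubelet's sync loop updates the mounted files, which is what the "waiting to observe update in volume" step polls for.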
• [SLOW TEST:10.227 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":316,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:44.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 29 10:37:49.214: INFO: Successfully updated pod "labelsupdate33a3c626-041a-44d4-b366-3c4679cc76bf" [AfterEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:51.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4660" for this suite. • [SLOW TEST:6.637 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":331,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:51.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-ec568b4f-2d1e-412a-a387-8d7a84d6b038 STEP: 
Creating a pod to test consume secrets Sep 29 10:37:51.382: INFO: Waiting up to 5m0s for pod "pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9" in namespace "secrets-9565" to be "Succeeded or Failed" Sep 29 10:37:51.403: INFO: Pod "pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.738578ms Sep 29 10:37:53.408: INFO: Pod "pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025849138s Sep 29 10:37:55.415: INFO: Pod "pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032450008s STEP: Saw pod success Sep 29 10:37:55.415: INFO: Pod "pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9" satisfied condition "Succeeded or Failed" Sep 29 10:37:55.418: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9 container secret-volume-test: STEP: delete the pod Sep 29 10:37:55.465: INFO: Waiting for pod pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9 to disappear Sep 29 10:37:55.508: INFO: Pod pod-secrets-84683e13-2eae-4864-a049-d9708d222ba9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:55.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9565" for this suite. 
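Editor's note: this test mounts a Secret with an `items` mapping (renaming the key's file path) and an explicit per-item file mode, which is why the test is tagged [LinuxOnly]. A sketch using the Secret name from the log; the key, path, pod name, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # illustrative
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-ec568b4f-2d1e-412a-a387-8d7a84d6b038
      items:
      - key: data-1                 # illustrative key, remapped to a custom path
        path: new-path-data-1
        mode: 0400                  # per-item file mode checked by the test
```

The test then verifies the file's content and mode from the container's log output.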
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":19,"skipped":331,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:55.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 10:37:55.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config version' Sep 29 10:37:55.771: INFO: stderr: "" Sep 29 10:37:55.771: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.2\", GitCommit:\"f5743093fd1c663cb0cbc89748f730662345d44d\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T13:41:02Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:37:55.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4758" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":20,"skipped":333,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:37:55.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Sep 29 10:39:56.397: INFO: Successfully updated pod "var-expansion-6f737a38-bd20-46c7-a815-754450ba4bc1" STEP: waiting for pod running STEP: deleting the pod gracefully Sep 29 10:40:00.433: INFO: Deleting pod "var-expansion-6f737a38-bd20-46c7-a815-754450ba4bc1" in namespace "var-expansion-9380" Sep 29 10:40:00.437: INFO: Wait up to 5m0s for pod 
"var-expansion-6f737a38-bd20-46c7-a815-754450ba4bc1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9380" for this suite. • [SLOW TEST:164.710 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":21,"skipped":335,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:40.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Sep 29 10:40:40.595: INFO: created test-event-1 Sep 29 10:40:40.608: INFO: created test-event-2 Sep 29 10:40:40.614: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Sep 29 10:40:40.622: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Sep 29 10:40:40.664: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:40.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1012" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":22,"skipped":352,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:40.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 10:40:40.741: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Sep 29 10:40:42.907: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:43.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3366" for this suite. 
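Editor's note: the failure condition surfaced here is `ReplicaFailure`, set by the controller when pod creation is rejected by the quota. The quota and controller created by this test can be approximated as follows (selector, labels, and image are illustrative; the log confirms the `condition-test` name, the two-pod quota, and the scale-down that clears the condition):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                       # only two pods allowed in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                       # asks for more pods than the quota allows
  selector:
    name: condition-test            # illustrative selector/labels
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: httpd                # illustrative image
```

With `replicas: 3` the third pod is rejected and the RC reports a `ReplicaFailure` condition in its status; scaling down to 2 (as the test does) satisfies the quota and the condition is removed.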
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":23,"skipped":358,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:43.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:44.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3549" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":24,"skipped":362,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:44.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 10:40:45.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe" in namespace "projected-7032" to 
be "Succeeded or Failed" Sep 29 10:40:45.060: INFO: Pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 25.9881ms Sep 29 10:40:47.109: INFO: Pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075650853s Sep 29 10:40:49.123: INFO: Pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.089508741s Sep 29 10:40:51.126: INFO: Pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092358808s STEP: Saw pod success Sep 29 10:40:51.126: INFO: Pod "downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe" satisfied condition "Succeeded or Failed" Sep 29 10:40:51.129: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe container client-container: STEP: delete the pod Sep 29 10:40:51.273: INFO: Waiting for pod downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe to disappear Sep 29 10:40:51.278: INFO: Pod downwardapi-volume-ae2c21cd-dd40-4d01-a1d6-0de7e34199fe no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:51.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7032" for this suite. 
• [SLOW TEST:6.403 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":374,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:51.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Sep 29 10:40:51.415: INFO: Waiting up to 5m0s for pod "client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337" in namespace "containers-319" to be "Succeeded or Failed" Sep 29 10:40:51.417: INFO: Pod 
"client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466468ms Sep 29 10:40:53.421: INFO: Pod "client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006187556s Sep 29 10:40:55.426: INFO: Pod "client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337": Phase="Running", Reason="", readiness=true. Elapsed: 4.010996196s Sep 29 10:40:57.431: INFO: Pod "client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01580237s STEP: Saw pod success Sep 29 10:40:57.431: INFO: Pod "client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337" satisfied condition "Succeeded or Failed" Sep 29 10:40:57.434: INFO: Trying to get logs from node kali-worker2 pod client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337 container test-container: STEP: delete the pod Sep 29 10:40:57.466: INFO: Waiting for pod client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337 to disappear Sep 29 10:40:57.474: INFO: Pod client-containers-7ced7192-615f-4ca8-9aae-e6d946b5a337 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:40:57.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-319" for this suite. 
• [SLOW TEST:6.195 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:40:57.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 10:40:57.590: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Sep 29 10:41:02.593: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 29 10:41:02.593: INFO: Creating 
deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 29 10:41:02.642: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5901 /apis/apps/v1/namespaces/deployment-5901/deployments/test-cleanup-deployment 21dba97b-9df7-426a-9112-9c743c6d6e74 1594355 1 2020-09-29 10:41:02 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-09-29 10:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 
0xc000bbb758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Sep 29 10:41:02.645: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Sep 29 10:41:02.645: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Sep 29 10:41:02.645: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5901 /apis/apps/v1/namespaces/deployment-5901/replicasets/test-cleanup-controller ea5a6c63-e011-4284-b8ce-0d240ee6f7e8 1594357 1 2020-09-29 10:40:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 21dba97b-9df7-426a-9112-9c743c6d6e74 0xc000bbbc77 0xc000bbbc78}] [] [{e2e.test Update apps/v1 2020-09-29 10:40:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 10:41:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"21dba97b-9df7-426a-9112-9c743c6d6e74\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000bbbd68 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 29 10:41:02.687: INFO: Pod "test-cleanup-controller-wnw4m" is available: &Pod{ObjectMeta:{test-cleanup-controller-wnw4m test-cleanup-controller- deployment-5901 /api/v1/namespaces/deployment-5901/pods/test-cleanup-controller-wnw4m d36526e8-cde0-48c8-b7e0-55cc0ae52eee 1594342 0 2020-09-29 10:40:57 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ea5a6c63-e011-4284-b8ce-0d240ee6f7e8 0xc00338a317 0xc00338a318}] [] [{kube-controller-manager Update v1 2020-09-29 10:40:57 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea5a6c63-e011-4284-b8ce-0d240ee6f7e8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 10:41:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.251\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xx88q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xx88q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xx88q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 10:40:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 10:41:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 10:41:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 10:40:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.251,StartTime:2020-09-29 10:40:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 10:40:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d22d1092f6395cab98a24f29a23e5ce8167ae100e25ef6d8ddf0133d68b8d1c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.251,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:41:02.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5901" for this suite. 
• [SLOW TEST:5.303 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":27,"skipped":422,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:41:02.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Sep 29 10:41:02.914: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5893 
/api/v1/namespaces/watch-5893/configmaps/e2e-watch-test-resource-version d273f565-0962-4fbd-9473-602a586393ad 1594378 0 2020-09-29 10:41:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-29 10:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 10:41:02.914: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5893 /api/v1/namespaces/watch-5893/configmaps/e2e-watch-test-resource-version d273f565-0962-4fbd-9473-602a586393ad 1594380 0 2020-09-29 10:41:02 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-09-29 10:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:41:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5893" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":28,"skipped":427,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:41:02.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-7e06ddfc-6a06-4c26-8adb-097e7b43d420 STEP: Creating a pod to test consume secrets Sep 29 10:41:03.107: INFO: Waiting up to 5m0s for pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87" in namespace "secrets-4839" to be "Succeeded or Failed" Sep 29 10:41:03.145: INFO: Pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87": Phase="Pending", Reason="", readiness=false. Elapsed: 37.818746ms Sep 29 10:41:05.148: INFO: Pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041516835s Sep 29 10:41:07.302: INFO: Pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195230808s Sep 29 10:41:09.310: INFO: Pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.202998314s STEP: Saw pod success Sep 29 10:41:09.310: INFO: Pod "pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87" satisfied condition "Succeeded or Failed" Sep 29 10:41:09.314: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87 container secret-env-test: STEP: delete the pod Sep 29 10:41:09.353: INFO: Waiting for pod pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87 to disappear Sep 29 10:41:09.370: INFO: Pod pod-secrets-dca3aeb7-7368-45d9-a712-834a95e37b87 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:41:09.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4839" for this suite. • [SLOW TEST:6.408 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":29,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:41:09.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 29 10:41:13.545: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:41:13.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-36" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":478,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:41:13.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-5a61c581-d88b-4abc-bc85-d99bb2ca833a STEP: Creating a pod to test consume configMaps Sep 29 10:41:13.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17" in namespace "configmap-6388" to be "Succeeded or Failed" Sep 29 10:41:13.873: INFO: Pod "pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777445ms Sep 29 10:41:15.877: INFO: Pod "pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010571627s Sep 29 10:41:17.881: INFO: Pod "pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014771473s STEP: Saw pod success Sep 29 10:41:17.881: INFO: Pod "pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17" satisfied condition "Succeeded or Failed" Sep 29 10:41:17.883: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17 container configmap-volume-test: STEP: delete the pod Sep 29 10:41:18.010: INFO: Waiting for pod pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17 to disappear Sep 29 10:41:18.044: INFO: Pod pod-configmaps-1837cb30-6124-4425-b51b-cb1bc0ff8f17 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:41:18.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6388" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":31,"skipped":492,"failed":0}
SSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:18.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 29 10:41:18.166: INFO: Waiting up to 5m0s for pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c" in namespace "downward-api-8948" to be "Succeeded or Failed"
Sep 29 10:41:18.169: INFO: Pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.524554ms
Sep 29 10:41:20.174: INFO: Pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007645027s
Sep 29 10:41:22.181: INFO: Pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c": Phase="Running", Reason="", readiness=true. Elapsed: 4.015211425s
Sep 29 10:41:24.193: INFO: Pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026745835s
STEP: Saw pod success
Sep 29 10:41:24.193: INFO: Pod "downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c" satisfied condition "Succeeded or Failed"
Sep 29 10:41:24.195: INFO: Trying to get logs from node kali-worker2 pod downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c container dapi-container:
STEP: delete the pod
Sep 29 10:41:24.232: INFO: Waiting for pod downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c to disappear
Sep 29 10:41:24.244: INFO: Pod downward-api-27386662-d56d-4b42-a6c3-c5ee7403ae1c no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:24.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8948" for this suite.
• [SLOW TEST:6.200 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":496,"failed":0}
S
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:24.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:24.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3357" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":33,"skipped":497,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:24.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 29 10:41:25.161: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 29 10:41:27.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972885, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972885, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972885, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736972885, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 29 10:41:30.220: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:42.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1029" for this suite.
STEP: Destroying namespace "webhook-1029-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.223 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":34,"skipped":498,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:42.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-cb8ebce5-e995-4c0f-8cdc-ec736a6c9726
STEP: Creating a pod to test consume configMaps
Sep 29 10:41:42.717: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e" in namespace "projected-9951" to be "Succeeded or Failed"
Sep 29 10:41:42.792: INFO: Pod "pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 74.391837ms
Sep 29 10:41:44.796: INFO: Pod "pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078425567s
Sep 29 10:41:46.800: INFO: Pod "pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083103914s
STEP: Saw pod success
Sep 29 10:41:46.801: INFO: Pod "pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e" satisfied condition "Succeeded or Failed"
Sep 29 10:41:46.803: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e container projected-configmap-volume-test:
STEP: delete the pod
Sep 29 10:41:47.196: INFO: Waiting for pod pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e to disappear
Sep 29 10:41:47.206: INFO: Pod pod-projected-configmaps-5cb6ff76-c4c5-4042-9b5f-65ab99613b8e no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:47.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9951" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":510,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:47.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-0dcfc50a-6ad1-41cb-8de1-161451a5d8e7
STEP: Creating a pod to test consume secrets
Sep 29 10:41:47.355: INFO: Waiting up to 5m0s for pod "pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617" in namespace "secrets-9899" to be "Succeeded or Failed"
Sep 29 10:41:47.368: INFO: Pod "pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617": Phase="Pending", Reason="", readiness=false. Elapsed: 13.489848ms
Sep 29 10:41:49.403: INFO: Pod "pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048210844s
Sep 29 10:41:51.407: INFO: Pod "pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05263113s
STEP: Saw pod success
Sep 29 10:41:51.407: INFO: Pod "pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617" satisfied condition "Succeeded or Failed"
Sep 29 10:41:51.410: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617 container secret-volume-test:
STEP: delete the pod
Sep 29 10:41:51.477: INFO: Waiting for pod pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617 to disappear
Sep 29 10:41:51.486: INFO: Pod pod-secrets-7f273af6-a5ae-4f96-8904-82a5410f8617 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:51.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9899" for this suite.
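The secrets test above builds a pod whose secret volume sets an explicit defaultMode while the pod runs as non-root with an fsGroup. A minimal sketch of the pattern it exercises (the pod name, image, command, secret name, and mode values are illustrative, not the exact ones the e2e framework generates):

```yaml
# Hypothetical sketch: a secret volume with a non-default file mode,
# consumed by a non-root pod whose fsGroup governs group ownership of
# the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000   # run as non-root
    fsGroup: 2000     # volume files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox    # placeholder image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
      defaultMode: 0400   # octal in YAML; JSON manifests must use decimal 256
```

The test asserts from inside the container that the mounted file carries the requested permission bits and ownership, which is why it inspects the container's logs after the pod reaches "Succeeded".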
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":518,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:51.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap that has name configmap-test-emptyKey-b196fcc2-969b-4810-8b07-f385f6484010
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:51.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5371" for this suite.
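The empty-key ConfigMap test above relies on apiserver validation: data keys must be non-empty and consist only of alphanumerics, `-`, `_`, and `.`, so a create request with an empty key is rejected before anything is stored. A sketch of a manifest the apiserver would refuse (the name is illustrative):

```yaml
# Invalid on purpose: "" is not an accepted data key, so creating this
# ConfigMap fails with a validation error instead of persisting it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key-example
data:
  "": "value-1"
```

Because the rejection happens at admission, the test only needs to attempt the create and assert that an error is returned; no cleanup of the object is required.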
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":37,"skipped":521,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:51.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Sep 29 10:41:51.773: INFO: Waiting up to 5m0s for pod "downward-api-051d59af-b86e-4142-a74c-4f33a613948a" in namespace "downward-api-500" to be "Succeeded or Failed"
Sep 29 10:41:51.791: INFO: Pod "downward-api-051d59af-b86e-4142-a74c-4f33a613948a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.299996ms
Sep 29 10:41:53.894: INFO: Pod "downward-api-051d59af-b86e-4142-a74c-4f33a613948a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121058059s
Sep 29 10:41:55.899: INFO: Pod "downward-api-051d59af-b86e-4142-a74c-4f33a613948a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125646301s
STEP: Saw pod success
Sep 29 10:41:55.899: INFO: Pod "downward-api-051d59af-b86e-4142-a74c-4f33a613948a" satisfied condition "Succeeded or Failed"
Sep 29 10:41:55.902: INFO: Trying to get logs from node kali-worker pod downward-api-051d59af-b86e-4142-a74c-4f33a613948a container dapi-container:
STEP: delete the pod
Sep 29 10:41:55.951: INFO: Waiting for pod downward-api-051d59af-b86e-4142-a74c-4f33a613948a to disappear
Sep 29 10:41:55.965: INFO: Pod downward-api-051d59af-b86e-4142-a74c-4f33a613948a no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:41:55.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-500" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:41:55.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6045.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6045.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 29 10:42:02.190: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.194: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.197: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.201: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.210: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.214: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.217: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.220: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:02.227: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local]
Sep 29 10:42:07.232: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.235: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.238: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.241: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.251: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.254: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.257: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.260: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:07.266: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local]
Sep 29 10:42:12.232: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.236: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.240: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.244: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.254: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.257: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.261: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.264: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:12.270: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local]
Sep 29 10:42:17.236: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.240: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.246: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.250: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.257: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.260: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.262: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.264: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:17.270: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local]
Sep 29 10:42:22.232: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.235: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.238: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.240: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.249: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.252: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.255: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.258: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:22.265: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local]
Sep 29 10:42:27.231: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:27.234: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:27.237: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f)
Sep 29 10:42:27.239: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods
dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f) Sep 29 10:42:27.280: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f) Sep 29 10:42:27.284: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f) Sep 29 10:42:27.287: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f) Sep 29 10:42:27.290: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local from pod dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f: the server could not find the requested resource (get pods dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f) Sep 29 10:42:27.299: INFO: Lookups using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6045.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6045.svc.cluster.local jessie_udp@dns-test-service-2.dns-6045.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6045.svc.cluster.local] Sep 29 10:42:32.266: INFO: DNS probes using dns-6045/dns-test-f4b8b380-e920-45ec-bb3f-2453b7ba031f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:42:32.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6045" for this suite. • [SLOW TEST:36.927 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":39,"skipped":574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:42:32.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when 
restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 29 10:42:32.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3597' Sep 29 10:42:33.094: INFO: stderr: "" Sep 29 10:42:33.094: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Sep 29 10:42:33.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3597' Sep 29 10:42:35.230: INFO: stderr: "" Sep 29 10:42:35.230: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:42:35.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3597" for this suite. 
•
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":40,"skipped":597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:42:35.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:42:40.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4025" for this suite.
• [SLOW TEST:5.625 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":41,"skipped":619,"failed":0}
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:42:40.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Sep 29 10:42:40.981: INFO: Waiting up to 1m0s for all nodes to be ready
Sep 29 10:43:41.004: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Sep 29 10:43:41.077: INFO: Created pod: pod0-sched-preemption-low-priority
Sep 29 10:43:41.115: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that use same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:44:09.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6457" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:88.443 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":42,"skipped":619,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:44:09.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:44:22.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3431" for this suite.
• [SLOW TEST:13.268 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":43,"skipped":629,"failed":0}
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:44:22.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-lp95
STEP: Creating a pod to test atomic-volume-subpath
Sep 29 10:44:22.700: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lp95" in namespace "subpath-2671" to be "Succeeded or Failed"
Sep 29 10:44:22.711: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Pending", Reason="", readiness=false. Elapsed: 10.722928ms
Sep 29 10:44:24.716: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015999876s
Sep 29 10:44:26.720: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 4.02044438s
Sep 29 10:44:28.726: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 6.026640061s
Sep 29 10:44:30.731: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 8.031042462s
Sep 29 10:44:32.735: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 10.035411956s
Sep 29 10:44:34.740: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 12.040168283s
Sep 29 10:44:36.744: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 14.044665247s
Sep 29 10:44:38.748: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 16.048430999s
Sep 29 10:44:40.753: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 18.052918783s
Sep 29 10:44:42.758: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 20.05806028s
Sep 29 10:44:44.763: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Running", Reason="", readiness=true. Elapsed: 22.063337936s
Sep 29 10:44:46.768: INFO: Pod "pod-subpath-test-projected-lp95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.067814964s
STEP: Saw pod success
Sep 29 10:44:46.768: INFO: Pod "pod-subpath-test-projected-lp95" satisfied condition "Succeeded or Failed"
Sep 29 10:44:46.771: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-lp95 container test-container-subpath-projected-lp95:
STEP: delete the pod
Sep 29 10:44:46.838: INFO: Waiting for pod pod-subpath-test-projected-lp95 to disappear
Sep 29 10:44:46.859: INFO: Pod pod-subpath-test-projected-lp95 no longer exists
STEP: Deleting pod pod-subpath-test-projected-lp95
Sep 29 10:44:46.859: INFO: Deleting pod "pod-subpath-test-projected-lp95" in namespace "subpath-2671"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:44:46.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2671" for this suite.
• [SLOW TEST:24.271 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":44,"skipped":629,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:44:46.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-r5sbl in namespace proxy-6977
I0929 10:44:46.981846 7 runners.go:190] Created replication controller with name: proxy-service-r5sbl, namespace: proxy-6977, replica count: 1
I0929 10:44:48.032315 7 runners.go:190] proxy-service-r5sbl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0929 10:44:49.032555 7 runners.go:190] proxy-service-r5sbl Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0929 10:44:50.033007 7 runners.go:190] proxy-service-r5sbl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0929 10:44:51.033270 7 runners.go:190] proxy-service-r5sbl Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0929 10:44:52.033557 7 runners.go:190] proxy-service-r5sbl Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Sep 29 10:44:52.045: INFO: setup took 5.132060265s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts Sep 29 10:44:52.053: INFO: (0) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 8.662643ms) Sep 29 10:44:52.054: INFO: (0) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 9.421851ms) Sep 29 10:44:52.054: INFO: (0) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 9.457473ms) Sep 29 10:44:52.054: INFO: (0) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 9.354419ms) Sep 29 10:44:52.054: INFO: (0) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 9.373158ms) Sep 29 10:44:52.054: INFO: (0) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 9.482077ms) Sep 29 10:44:52.055: INFO: (0) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 9.717635ms) Sep 29 10:44:52.055: INFO: (0) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 9.535973ms) Sep 29 10:44:52.055: INFO: (0) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 9.772529ms) Sep 29 10:44:52.055: INFO: (0) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 9.763269ms) Sep 29 10:44:52.055: INFO: (0) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 9.81904ms) Sep 29 10:44:52.060: INFO: (0) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 15.210143ms) Sep 29 10:44:52.060: INFO: (0) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 15.138084ms) Sep 29 10:44:52.060: INFO: (0) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 15.206131ms) Sep 29 10:44:52.060: INFO: (0) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 15.270709ms) Sep 29 10:44:52.062: INFO: (0) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 5.708832ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.680645ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 5.668121ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 5.711956ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.752664ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 5.749557ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.803392ms) Sep 29 10:44:52.067: INFO: (1) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 5.839673ms) Sep 29 10:44:52.073: INFO: (2) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.383239ms) Sep 29 10:44:52.073: INFO: (2) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.520363ms) Sep 29 10:44:52.073: INFO: (2) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 5.561762ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 6.307095ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 6.353991ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 6.384788ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 6.407394ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 6.383038ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 6.442482ms) Sep 29 10:44:52.074: INFO: (2) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test<... 
(200; 6.020876ms) Sep 29 10:44:52.082: INFO: (3) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 7.183756ms) Sep 29 10:44:52.082: INFO: (3) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 7.111259ms) Sep 29 10:44:52.082: INFO: (3) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 7.25784ms) Sep 29 10:44:52.082: INFO: (3) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 7.224078ms) Sep 29 10:44:52.082: INFO: (3) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 7.423571ms) Sep 29 10:44:52.084: INFO: (3) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 8.862268ms) Sep 29 10:44:52.084: INFO: (3) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 8.868ms) Sep 29 10:44:52.084: INFO: (3) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 8.856493ms) Sep 29 10:44:52.084: INFO: (3) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 8.924476ms) Sep 29 10:44:52.084: INFO: (3) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 8.960275ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 3.818406ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... 
(200; 4.05585ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.045226ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 3.981421ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 4.151233ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.117429ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 4.083093ms) Sep 29 10:44:52.088: INFO: (4) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 4.048649ms) Sep 29 10:44:52.093: INFO: (4) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 9.239035ms) Sep 29 10:44:52.093: INFO: (4) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 9.250876ms) Sep 29 10:44:52.094: INFO: (4) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 9.991113ms) Sep 29 10:44:52.094: INFO: (4) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 10.016144ms) Sep 29 10:44:52.094: INFO: (4) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 9.923632ms) Sep 29 10:44:52.094: INFO: (4) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 10.049871ms) Sep 29 10:44:52.099: INFO: (5) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.016934ms) Sep 29 10:44:52.099: INFO: (5) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 5.12328ms) Sep 29 10:44:52.099: INFO: (5) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 
5.108377ms) Sep 29 10:44:52.099: INFO: (5) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 5.115821ms) Sep 29 10:44:52.099: INFO: (5) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 5.160045ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 5.258504ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 5.313521ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 5.259672ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.270808ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.194473ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 5.320808ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.252959ms) Sep 29 10:44:52.100: INFO: (5) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 5.404987ms) Sep 29 10:44:52.103: INFO: (6) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 3.166915ms) Sep 29 10:44:52.103: INFO: (6) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 3.261805ms) Sep 29 10:44:52.103: INFO: (6) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 3.156861ms) Sep 29 10:44:52.103: INFO: (6) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.511765ms) Sep 29 10:44:52.103: INFO: (6) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 3.589788ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.260261ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 5.302711ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.341394ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 5.439655ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.394304ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 5.537426ms) Sep 29 10:44:52.105: INFO: (6) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... 
(200; 5.667695ms) Sep 29 10:44:52.106: INFO: (6) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.659849ms) Sep 29 10:44:52.106: INFO: (6) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 5.712519ms) Sep 29 10:44:52.106: INFO: (6) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.737584ms) Sep 29 10:44:52.108: INFO: (7) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test<... (200; 3.538106ms) Sep 29 10:44:52.109: INFO: (7) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 3.788993ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.03071ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 4.068719ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 4.227435ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 4.218909ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.477716ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 4.437959ms) Sep 29 10:44:52.110: INFO: (7) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 4.531676ms) Sep 29 10:44:52.111: INFO: (7) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.270376ms) Sep 29 10:44:52.111: INFO: (7) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 5.260355ms) Sep 29 10:44:52.111: INFO: (7) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 5.28074ms) 
Sep 29 10:44:52.111: INFO: (7) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 5.316364ms) Sep 29 10:44:52.111: INFO: (7) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.715778ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 3.290085ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test<... (200; 3.602943ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 3.666841ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.670023ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.63296ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 3.738231ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 3.647446ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.679552ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 3.836644ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 3.810859ms) Sep 29 10:44:52.115: INFO: (8) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.883464ms) Sep 29 10:44:52.120: INFO: (8) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 8.344637ms) Sep 29 10:44:52.120: INFO: (8) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 8.411885ms) Sep 29 
10:44:52.120: INFO: (8) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 8.374279ms) Sep 29 10:44:52.120: INFO: (8) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 8.347706ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 4.711123ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.083619ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.345257ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.389873ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 5.426535ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.506201ms) Sep 29 10:44:52.125: INFO: (9) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 5.503939ms) Sep 29 10:44:52.126: INFO: (9) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 5.894892ms) Sep 29 10:44:52.126: INFO: (9) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.941654ms) Sep 29 10:44:52.126: INFO: (9) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 5.90811ms) Sep 29 10:44:52.132: INFO: (9) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 12.248539ms) Sep 29 10:44:52.132: INFO: (9) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 12.243068ms) Sep 29 10:44:52.132: INFO: (9) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 12.282735ms) Sep 29 10:44:52.133: INFO: (9) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 12.916034ms) Sep 29 10:44:52.133: INFO: (9) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 12.987696ms) Sep 29 10:44:52.138: INFO: (10) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 5.410437ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 5.336379ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.321528ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.375382ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.439145ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 5.533399ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 5.561209ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.444566ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.537424ms) Sep 29 10:44:52.139: INFO: (10) 
/api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 5.717588ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 5.807465ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.869439ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 5.986414ms) Sep 29 10:44:52.139: INFO: (10) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 6.059475ms) Sep 29 10:44:52.142: INFO: (11) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 2.704361ms) Sep 29 10:44:52.144: INFO: (11) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... (200; 4.404425ms) Sep 29 10:44:52.144: INFO: (11) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.496546ms) Sep 29 10:44:52.144: INFO: (11) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 4.835678ms) Sep 29 10:44:52.144: INFO: (11) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.873923ms) Sep 29 10:44:52.144: INFO: (11) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 4.910118ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... 
(200; 5.333021ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 5.683764ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 5.652476ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 5.727319ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.552205ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 5.678928ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.684311ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.965642ms) Sep 29 10:44:52.145: INFO: (11) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 6.121037ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 20.258335ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 20.20343ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test<... 
(200; 20.549715ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 20.610758ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 20.687892ms) Sep 29 10:44:52.166: INFO: (12) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 20.684348ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 21.179503ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 21.429212ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 21.490661ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 21.491259ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 21.435101ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 21.529008ms) Sep 29 10:44:52.167: INFO: (12) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 21.607611ms) Sep 29 10:44:52.171: INFO: (13) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 4.612435ms) Sep 29 10:44:52.172: INFO: (13) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 4.609407ms) Sep 29 10:44:52.172: INFO: (13) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... 
(200; 4.670263ms) Sep 29 10:44:52.172: INFO: (13) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 5.243562ms) Sep 29 10:44:52.172: INFO: (13) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 5.182897ms) Sep 29 10:44:52.173: INFO: (13) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.398889ms) Sep 29 10:44:52.173: INFO: (13) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 5.365371ms) Sep 29 10:44:52.173: INFO: (13) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 6.030734ms) Sep 29 10:44:52.174: INFO: (13) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 6.56655ms) Sep 29 10:44:52.174: INFO: (13) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 6.563701ms) Sep 29 10:44:52.174: INFO: (13) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 6.50851ms) Sep 29 10:44:52.174: INFO: (13) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 6.438614ms) Sep 29 10:44:52.178: INFO: (14) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.218046ms) Sep 29 10:44:52.178: INFO: (14) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 4.299017ms) Sep 29 10:44:52.178: INFO: (14) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 4.357825ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 4.648783ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 4.671951ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 4.60075ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 4.879232ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 4.853357ms) Sep 29 10:44:52.179: INFO: (14) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 5.004021ms) Sep 29 10:44:52.180: INFO: (14) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 6.271122ms) Sep 29 10:44:52.180: INFO: (14) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 6.280598ms) Sep 29 10:44:52.180: INFO: (14) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 6.349632ms) Sep 29 10:44:52.180: INFO: (14) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 6.440493ms) Sep 29 10:44:52.181: INFO: (14) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 6.693525ms) Sep 29 10:44:52.181: INFO: (14) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 6.71593ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 2.922642ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.212885ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 3.266559ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.27992ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux 
(200; 3.526909ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.539027ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.508437ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 3.583719ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... (200; 3.583192ms) Sep 29 10:44:52.184: INFO: (15) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... (200; 5.375642ms) Sep 29 10:44:52.191: INFO: (16) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 5.776154ms) Sep 29 10:44:52.192: INFO: (16) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 6.491676ms) Sep 29 10:44:52.192: INFO: (16) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 8.002555ms) Sep 29 10:44:52.193: INFO: (16) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... 
(200; 8.001842ms) Sep 29 10:44:52.193: INFO: (16) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 8.014385ms) Sep 29 10:44:52.193: INFO: (16) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 8.006623ms) Sep 29 10:44:52.193: INFO: (16) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 8.263378ms) Sep 29 10:44:52.194: INFO: (16) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 8.720437ms) Sep 29 10:44:52.194: INFO: (16) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 8.799971ms) Sep 29 10:44:52.194: INFO: (16) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 8.867982ms) Sep 29 10:44:52.194: INFO: (16) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 8.872243ms) Sep 29 10:44:52.197: INFO: (17) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 2.776591ms) Sep 29 10:44:52.197: INFO: (17) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 2.832481ms) Sep 29 10:44:52.197: INFO: (17) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... 
(200; 2.90943ms) Sep 29 10:44:52.197: INFO: (17) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 2.95803ms) Sep 29 10:44:52.197: INFO: (17) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.386859ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 3.468638ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.011118ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 4.015389ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 4.019696ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 4.10821ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 4.168168ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 4.150064ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 4.254828ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 4.329773ms) Sep 29 10:44:52.198: INFO: (17) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 4.336839ms) Sep 29 10:44:52.205: INFO: (18) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 6.541152ms) Sep 29 10:44:52.205: INFO: (18) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: ... 
(200; 7.094362ms) Sep 29 10:44:52.206: INFO: (18) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 7.235725ms) Sep 29 10:44:52.206: INFO: (18) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 7.354253ms) Sep 29 10:44:52.206: INFO: (18) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 7.686299ms) Sep 29 10:44:52.208: INFO: (18) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 9.084897ms) Sep 29 10:44:52.208: INFO: (18) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 9.317681ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 16.027554ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 16.062762ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 16.068633ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 16.116739ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr/proxy/: test (200; 16.132648ms) Sep 29 10:44:52.215: INFO: (18) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 16.244103ms) Sep 29 10:44:52.219: INFO: (18) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... 
(200; 20.407968ms) Sep 29 10:44:52.229: INFO: (18) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 30.921513ms) Sep 29 10:44:52.232: INFO: (19) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname2/proxy/: bar (200; 2.83907ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.048732ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname2/proxy/: bar (200; 3.418049ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:1080/proxy/: test<... (200; 3.483773ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:443/proxy/: test (200; 3.571547ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:462/proxy/: tls qux (200; 3.530746ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:1080/proxy/: ... 
(200; 3.54537ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.548752ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/services/http:proxy-service-r5sbl:portname1/proxy/: foo (200; 3.630792ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname2/proxy/: tls qux (200; 3.593543ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/proxy-service-r5sbl-mn8hr:160/proxy/: foo (200; 3.580306ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/services/https:proxy-service-r5sbl:tlsportname1/proxy/: tls baz (200; 3.615881ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/https:proxy-service-r5sbl-mn8hr:460/proxy/: tls baz (200; 3.664985ms) Sep 29 10:44:52.233: INFO: (19) /api/v1/namespaces/proxy-6977/pods/http:proxy-service-r5sbl-mn8hr:162/proxy/: bar (200; 3.768771ms) Sep 29 10:44:52.234: INFO: (19) /api/v1/namespaces/proxy-6977/services/proxy-service-r5sbl:portname1/proxy/: foo (200; 4.105826ms) STEP: deleting ReplicationController proxy-service-r5sbl in namespace proxy-6977, will wait for the garbage collector to delete the pods Sep 29 10:44:52.293: INFO: Deleting ReplicationController proxy-service-r5sbl took: 6.646699ms Sep 29 10:44:52.693: INFO: Terminating ReplicationController proxy-service-r5sbl pods took: 400.278352ms [AfterEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:44:54.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6977" for this suite. 
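The proxy requests logged above all follow the apiserver's proxy subresource URL scheme, where a pod or service target is addressed as `[scheme:]name[:port]` under its namespace. As a rough illustration only (the `proxy_path` helper below is hypothetical, not part of the e2e framework), the paths seen in the log can be built like this:

```python
# Hypothetical helper mirroring the URL scheme the proxy e2e test exercises.
# The apiserver proxy subresource addresses a target as [scheme:]name[:port]
# under /pods/ or /services/ within a namespace.
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy subresource path; kind is 'pods' or 'services'."""
    target = name
    if port is not None:
        target = f"{target}:{port}"   # numeric port or named port both appear in the log
    if scheme is not None:
        target = f"{scheme}:{target}" # http/https prefix selects the backend scheme
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"
```

For example, `proxy_path("proxy-6977", "pods", "proxy-service-r5sbl-mn8hr", 462, "https")` reproduces one of the pod proxy paths requested above.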
• [SLOW TEST:8.137 seconds] [sig-network] Proxy /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":45,"skipped":633,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:44:55.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 29 10:44:55.066: INFO: Waiting up to 5m0s for pod "pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc" in namespace "emptydir-3315" to be "Succeeded or Failed" Sep 29 10:44:55.099: INFO: Pod "pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc": 
Phase="Pending", Reason="", readiness=false. Elapsed: 32.174801ms Sep 29 10:44:57.103: INFO: Pod "pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036417298s Sep 29 10:44:59.107: INFO: Pod "pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04091725s STEP: Saw pod success Sep 29 10:44:59.107: INFO: Pod "pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc" satisfied condition "Succeeded or Failed" Sep 29 10:44:59.110: INFO: Trying to get logs from node kali-worker pod pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc container test-container: STEP: delete the pod Sep 29 10:44:59.171: INFO: Waiting for pod pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc to disappear Sep 29 10:44:59.201: INFO: Pod pod-0c2c39d1-f36b-4e89-8e2a-b420afcc68dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:44:59.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3315" for this suite. 
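The EmptyDir test above waits up to 5m0s for the pod to reach the "Succeeded or Failed" condition, polling its phase roughly every two seconds. A minimal sketch of that wait loop, under the assumption that `get_phase` stands in for a real API lookup (this is not the framework's actual Go implementation):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, now=time.monotonic):
    """Poll get_phase() until the pod reaches a terminal phase or the timeout expires.

    get_phase is a caller-supplied stand-in for fetching the pod's status;
    timeout=300.0 mirrors the 5m0s bound in the log above.
    """
    deadline = now() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if now() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)
```

In the run above the pod went Pending, Pending, Succeeded across three polls, which is the common path for a short-lived test container.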
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":643,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:44:59.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:45:30.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7853" for this suite. STEP: Destroying namespace "nsdeletetest-2260" for this suite. 
Sep 29 10:45:30.556: INFO: Namespace nsdeletetest-2260 was already deleted STEP: Destroying namespace "nsdeletetest-9891" for this suite. • [SLOW TEST:31.351 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":47,"skipped":651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:45:30.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:45:30.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1446" for this suite.
STEP: Destroying namespace "nspatchtest-bff0051c-2d4d-45a1-aa55-eb44ead313ed-637" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":48,"skipped":677,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:45:30.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 10:45:30.816: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:45:31.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6868" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":49,"skipped":685,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:45:31.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Sep 29 10:45:31.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config cluster-info'
Sep 29 10:45:32.064: INFO: stderr: ""
Sep 29 10:45:32.065: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo
further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:45:32.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6248" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":50,"skipped":695,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:45:32.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 10:45:32.574: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 10:45:34.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973132, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973132, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973132, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973132, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 10:45:37.677: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Sep 29 10:45:41.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config attach --namespace=webhook-3669 to-be-attached-pod -i -c=container1' Sep 29 10:45:41.873: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:45:41.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3669" for this 
suite. STEP: Destroying namespace "webhook-3669-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.921 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":51,"skipped":709,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:45:41.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-3373
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Sep 29 10:45:42.086: INFO: Found 0 stateful pods, waiting for 3
Sep 29 10:45:52.101: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:45:52.101: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:45:52.101: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Sep 29 10:46:02.090: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:46:02.090: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:46:02.090: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Sep 29 10:46:02.117: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Sep 29 10:46:12.229: INFO: Updating stateful set ss2
Sep 29 10:46:12.277: INFO: Waiting for Pod statefulset-3373/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 29 10:46:22.285: INFO: Waiting for Pod statefulset-3373/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Sep 29 10:46:33.666: INFO: Found 2 stateful pods, waiting for 3
Sep 29 10:46:43.673: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:46:43.673: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Sep 29 10:46:43.673: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Sep 29 10:46:43.699: INFO: Updating stateful set ss2
Sep 29 10:46:43.740: INFO: Waiting for Pod statefulset-3373/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Sep 29 10:46:53.770: INFO: Updating stateful set ss2
Sep 29 10:46:53.813: INFO: Waiting for StatefulSet statefulset-3373/ss2 to complete update
Sep 29 10:46:53.813: INFO: Waiting for Pod statefulset-3373/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 29 10:47:03.823: INFO: Deleting all statefulset in ns statefulset-3373
Sep 29 10:47:03.827: INFO: Scaling statefulset ss2 to 0
Sep 29 10:47:23.884: INFO: Waiting for statefulset status.replicas updated to 0
Sep 29 10:47:23.887: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:47:23.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3373" for this suite.
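The canary and phased rollout steps in this spec are driven by the StatefulSet `spec.updateStrategy.rollingUpdate.partition` field: only pods whose ordinal is greater than or equal to the partition move to the updated revision. A minimal kubectl sketch of the same flow, reusing the names from the log (the container name `webserver` is an assumption, not taken from the log; run only against a throwaway cluster):

```shell
# Pin the partition above the replica count (3): changing the template
# records a new revision but updates no pods ("Not applying an update when
# the partition is greater than the number of replicas").
kubectl -n statefulset-3373 patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
kubectl -n statefulset-3373 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine   # container name assumed

# Canary: partition 2 rolls only the highest ordinal, ss2-2.
kubectl -n statefulset-3373 patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# Phased rollout: partition 0 updates the remaining pods in reverse
# ordinal order (ss2-1, then ss2-0).
kubectl -n statefulset-3373 patch statefulset ss2 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl -n statefulset-3373 rollout status statefulset/ss2
```

A pod deleted mid-rollout is recreated at whichever revision its ordinal falls under, which is what the "Restoring Pods to the correct revision when they are deleted" step verifies.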
• [SLOW TEST:101.917 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":52,"skipped":712,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:47:23.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0929 10:47:34.047986 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 29 10:48:36.067: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:48:36.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6032" for this suite.
• [SLOW TEST:72.169 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":53,"skipped":719,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:48:36.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Sep 29 10:48:36.181: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:36.195: INFO: Number of nodes with available pods: 0 Sep 29 10:48:36.196: INFO: Node kali-worker is running more than one daemon pod Sep 29 10:48:37.258: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:37.261: INFO: Number of nodes with available pods: 0 Sep 29 10:48:37.261: INFO: Node kali-worker is running more than one daemon pod Sep 29 10:48:38.271: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:38.274: INFO: Number of nodes with available pods: 0 Sep 29 10:48:38.274: INFO: Node kali-worker is running more than one daemon pod Sep 29 10:48:39.203: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:39.207: INFO: Number of nodes with available pods: 0 Sep 29 10:48:39.207: INFO: Node kali-worker is running more than one daemon pod Sep 29 10:48:40.202: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Sep 29 10:48:40.206: INFO: Number of nodes with available pods: 1 Sep 29 10:48:40.206: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:41.204: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:41.209: INFO: Number of nodes with available pods: 2 Sep 29 10:48:41.209: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Sep 29 10:48:41.282: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:41.301: INFO: Number of nodes with available pods: 1 Sep 29 10:48:41.302: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:42.354: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:42.358: INFO: Number of nodes with available pods: 1 Sep 29 10:48:42.358: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:43.306: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:43.310: INFO: Number of nodes with available pods: 1 Sep 29 10:48:43.310: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:44.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:44.311: INFO: Number of nodes with available pods: 1 Sep 29 10:48:44.311: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:45.313: INFO: DaemonSet pods can't tolerate node 
kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:45.317: INFO: Number of nodes with available pods: 1 Sep 29 10:48:45.317: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:46.309: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:46.313: INFO: Number of nodes with available pods: 1 Sep 29 10:48:46.313: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:47.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:47.317: INFO: Number of nodes with available pods: 1 Sep 29 10:48:47.317: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:48.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:48.317: INFO: Number of nodes with available pods: 1 Sep 29 10:48:48.317: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:49.330: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:49.347: INFO: Number of nodes with available pods: 1 Sep 29 10:48:49.347: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:50.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:50.310: INFO: Number of nodes with available pods: 1 Sep 29 10:48:50.310: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:51.307: 
INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:51.311: INFO: Number of nodes with available pods: 1 Sep 29 10:48:51.311: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 10:48:52.307: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 10:48:52.311: INFO: Number of nodes with available pods: 2 Sep 29 10:48:52.311: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4381, will wait for the garbage collector to delete the pods Sep 29 10:48:52.375: INFO: Deleting DaemonSet.extensions daemon-set took: 8.325124ms Sep 29 10:48:52.776: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.208356ms Sep 29 10:48:58.179: INFO: Number of nodes with available pods: 0 Sep 29 10:48:58.179: INFO: Number of running nodes: 0, number of available pods: 0 Sep 29 10:48:58.186: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4381/daemonsets","resourceVersion":"1597174"},"items":null} Sep 29 10:48:58.190: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4381/pods","resourceVersion":"1597174"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:48:58.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready
STEP: Destroying namespace "daemonsets-4381" for this suite.
• [SLOW TEST:22.129 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":54,"skipped":743,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 10:48:58.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-de893f9a-5b69-41e1-900c-92ca3d597d68
STEP: Creating a pod to test consume configMaps
Sep 29 10:48:58.349: INFO: Waiting up to 5m0s for pod "pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a" in namespace "configmap-1493" to be "Succeeded or Failed"
Sep 29 10:48:58.353: INFO: Pod "pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.644373ms
Sep 29 10:49:00.358: INFO: Pod "pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009011148s
Sep 29 10:49:02.363: INFO: Pod "pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013948425s
STEP: Saw pod success
Sep 29 10:49:02.363: INFO: Pod "pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a" satisfied condition "Succeeded or Failed"
Sep 29 10:49:02.366: INFO: Trying to get logs from node kali-worker pod pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a container configmap-volume-test:
STEP: delete the pod
Sep 29 10:49:02.415: INFO: Waiting for pod pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a to disappear
Sep 29 10:49:02.437: INFO: Pod pod-configmaps-27edcf02-15ef-44ae-b76c-44a6316ed49a no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 10:49:02.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1493" for this suite.
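The ConfigMap spec just completed projects a key through `items` (remapping the file path, "mappings") with an explicit per-file `mode` ("Item mode set"). A hedged sketch of an equivalent manifest; the names, key, and mode below are illustrative, not the test's exact fixture:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    # Print the remapped file and its mode, then exit so the pod reaches
    # Succeeded, matching the "Succeeded or Failed" wait in the log.
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2; stat -c '%a' /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1          # ConfigMap key...
        path: path/to/data-2 # ...projected under a different relative path
        mode: 0400           # per-file mode ("Item mode set")
EOF
```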
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":772,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:49:02.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4830 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4830 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4830 Sep 29 
10:49:02.557: INFO: Found 0 stateful pods, waiting for 1 Sep 29 10:49:12.561: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Sep 29 10:49:12.565: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:49:15.392: INFO: stderr: "I0929 10:49:15.273169 243 log.go:181] (0xc00003a0b0) (0xc0001d2140) Create stream\nI0929 10:49:15.273232 243 log.go:181] (0xc00003a0b0) (0xc0001d2140) Stream added, broadcasting: 1\nI0929 10:49:15.274963 243 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0929 10:49:15.275005 243 log.go:181] (0xc00003a0b0) (0xc00055c1e0) Create stream\nI0929 10:49:15.275021 243 log.go:181] (0xc00003a0b0) (0xc00055c1e0) Stream added, broadcasting: 3\nI0929 10:49:15.275797 243 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0929 10:49:15.275826 243 log.go:181] (0xc00003a0b0) (0xc00055c280) Create stream\nI0929 10:49:15.275838 243 log.go:181] (0xc00003a0b0) (0xc00055c280) Stream added, broadcasting: 5\nI0929 10:49:15.276735 243 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0929 10:49:15.354077 243 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0929 10:49:15.354106 243 log.go:181] (0xc00055c280) (5) Data frame handling\nI0929 10:49:15.354126 243 log.go:181] (0xc00055c280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:49:15.384536 243 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0929 10:49:15.384563 243 log.go:181] (0xc00055c1e0) (3) Data frame handling\nI0929 10:49:15.384590 243 log.go:181] (0xc00055c1e0) (3) Data frame sent\nI0929 10:49:15.384948 243 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0929 10:49:15.384993 243 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0929 
10:49:15.385035 243 log.go:181] (0xc00055c280) (5) Data frame handling\nI0929 10:49:15.385073 243 log.go:181] (0xc00055c1e0) (3) Data frame handling\nI0929 10:49:15.386777 243 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0929 10:49:15.386812 243 log.go:181] (0xc0001d2140) (1) Data frame handling\nI0929 10:49:15.386827 243 log.go:181] (0xc0001d2140) (1) Data frame sent\nI0929 10:49:15.386842 243 log.go:181] (0xc00003a0b0) (0xc0001d2140) Stream removed, broadcasting: 1\nI0929 10:49:15.386866 243 log.go:181] (0xc00003a0b0) Go away received\nI0929 10:49:15.387367 243 log.go:181] (0xc00003a0b0) (0xc0001d2140) Stream removed, broadcasting: 1\nI0929 10:49:15.387405 243 log.go:181] (0xc00003a0b0) (0xc00055c1e0) Stream removed, broadcasting: 3\nI0929 10:49:15.387431 243 log.go:181] (0xc00003a0b0) (0xc00055c280) Stream removed, broadcasting: 5\n" Sep 29 10:49:15.393: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:49:15.393: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:49:15.397: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Sep 29 10:49:25.403: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:49:25.403: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:49:25.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999541s Sep 29 10:49:26.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980495778s Sep 29 10:49:27.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975885013s Sep 29 10:49:28.472: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.960752913s Sep 29 10:49:29.476: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955824802s Sep 29 10:49:30.481: INFO: Verifying statefulset ss doesn't scale past 1 for another 
4.951080319s Sep 29 10:49:31.515: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946724772s Sep 29 10:49:32.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.912094568s Sep 29 10:49:33.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.90784846s Sep 29 10:49:34.530: INFO: Verifying statefulset ss doesn't scale past 1 for another 901.734249ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4830 Sep 29 10:49:35.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:49:35.795: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 29 10:49:35.795: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:49:35.795: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:49:35.798: INFO: Found 1 stateful pods, waiting for 3 Sep 29 10:49:45.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:49:45.803: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:49:45.803: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Sep 29 10:49:45.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:49:46.021: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Sep 29 10:49:46.021: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:49:46.021: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:49:46.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:49:46.287: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Sep 29 10:49:46.287: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:49:46.287: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:49:46.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:49:46.616: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Sep 29 10:49:46.616: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:49:46.616: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:49:46.616: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:49:46.624: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Sep 29 10:49:56.630: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:49:56.630: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:49:56.630: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Sep 29 10:49:56.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999547s Sep 29 10:49:57.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993544184s Sep 29 10:49:58.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988498171s Sep 29 10:49:59.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983738423s Sep 29 10:50:00.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977478355s Sep 29 10:50:01.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971411983s Sep 29 10:50:02.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965871208s Sep 29 10:50:03.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.961615095s Sep 29 10:50:04.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.955667339s Sep 29 10:50:05.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.080472ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will be running in namespace statefulset-4830 Sep 29 10:50:06.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:50:06.941: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 29 10:50:06.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:50:06.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:50:06.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:50:07.152: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 29 10:50:07.152: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:50:07.152: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:50:07.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4830 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:50:07.369: INFO: stderr: "[SPDY exec stream setup/teardown log lines elided]\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Sep 29 10:50:07.369: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:50:07.369: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:50:07.369: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 29 10:50:37.439: INFO: Deleting all statefulset in ns statefulset-4830 Sep 29 10:50:37.443: INFO: Scaling statefulset ss to 0 Sep 29 10:50:37.455: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:50:37.457: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:50:37.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4830" for this suite. 
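Throughout the scale-up/scale-down sequence above, the suite toggles pod readiness by moving index.html out of and back into the httpd docroot with `kubectl exec ... mv`; once the probe fails, the pod reports Ready=false, and with the default OrderedReady pod management policy the StatefulSet controller halts further scaling until readiness is restored. A minimal sketch of a StatefulSet of this shape (the names and image mirror the log, but the manifest details are assumptions, not the suite's actual spec):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-4830
spec:
  serviceName: test                  # headless governing Service created by the suite
  podManagementPolicy: OrderedReady  # default; pods are created/deleted one at a time, in order
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine   # assumed image; serves /usr/local/apache2/htdocs
        readinessProbe:
          httpGet:                   # starts failing once index.html is moved to /tmp
            path: /index.html
            port: 80
          periodSeconds: 1
```

Moving index.html back restores the probe, the pod returns to Ready=true, and the blocked scale operation proceeds; that is why the log alternates `mv` commands with "doesn't scale past N" countdowns.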
• [SLOW TEST:95.037 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":56,"skipped":818,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:50:37.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 29 10:50:37.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2357' Sep 29 10:50:37.966: INFO: stderr: "" Sep 29 10:50:37.966: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 29 10:50:38.970: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:38.970: INFO: Found 0 / 1 Sep 29 10:50:39.970: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:39.970: INFO: Found 0 / 1 Sep 29 10:50:40.971: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:40.971: INFO: Found 0 / 1 Sep 29 10:50:41.972: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:41.972: INFO: Found 1 / 1 Sep 29 10:50:41.972: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Sep 29 10:50:41.975: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:41.975: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Sep 29 10:50:41.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config patch pod agnhost-primary-wxb9z --namespace=kubectl-2357 -p {"metadata":{"annotations":{"x":"y"}}}' Sep 29 10:50:42.091: INFO: stderr: "" Sep 29 10:50:42.091: INFO: stdout: "pod/agnhost-primary-wxb9z patched\n" STEP: checking annotations Sep 29 10:50:42.108: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 10:50:42.108: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
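The `-p {"metadata":{"annotations":{"x":"y"}}}` argument logged above is a strategic merge patch: kubectl merges the given fragment into the live pod object and leaves every unspecified field untouched, which is why only the annotation changes. The same patch body, shown as YAML for readability (the inline JSON form is what the test actually passes):

```yaml
# Strategic merge patch body; merged into pod metadata, all other fields untouched
metadata:
  annotations:
    x: "y"
```

An equivalent invocation would be `kubectl patch pod <pod-name> --namespace=kubectl-2357 -p '{"metadata":{"annotations":{"x":"y"}}}'`.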
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:50:42.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2357" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":57,"skipped":829,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:50:42.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-4401 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4401 to expose endpoints map[] Sep 29 10:50:42.340: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Sep 29 10:50:43.348: INFO: successfully validated that service multi-endpoint-test in namespace services-4401 exposes endpoints map[] STEP: Creating pod 
pod1 in namespace services-4401 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4401 to expose endpoints map[pod1:[100]] Sep 29 10:50:47.538: INFO: successfully validated that service multi-endpoint-test in namespace services-4401 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-4401 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4401 to expose endpoints map[pod1:[100] pod2:[101]] Sep 29 10:50:50.748: INFO: successfully validated that service multi-endpoint-test in namespace services-4401 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-4401 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4401 to expose endpoints map[pod2:[101]] Sep 29 10:50:50.822: INFO: successfully validated that service multi-endpoint-test in namespace services-4401 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-4401 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4401 to expose endpoints map[] Sep 29 10:50:51.852: INFO: successfully validated that service multi-endpoint-test in namespace services-4401 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:50:51.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4401" for this suite. 
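The endpoints maps above pair each pod with a distinct target port (pod1 on 100, pod2 on 101), which is what a two-port Service produces when each backing pod serves only one of the target ports. A sketch of such a Service (port numbers taken from the log; the selector, port names, and service ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-4401
spec:
  selector:
    app: multi-endpoint-test  # assumed label; must match pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100           # pod1 appears in endpoints as pod1:[100]
  - name: portname2
    port: 81
    targetPort: 101           # pod2 appears in endpoints as pod2:[101]
```

As pods matching the selector are created and deleted, the endpoints controller adds and removes their addresses, which is exactly the map[] → map[pod1:[100]] → map[pod1:[100] pod2:[101]] → map[pod2:[101]] → map[] progression validated above.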
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.777 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":58,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:50:51.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod 
is in kubernetes STEP: updating the pod Sep 29 10:50:56.506: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0843662b-e4b9-4bcf-ba1d-74a2b3c865ee" Sep 29 10:50:56.506: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0843662b-e4b9-4bcf-ba1d-74a2b3c865ee" in namespace "pods-1424" to be "terminated due to deadline exceeded" Sep 29 10:50:56.511: INFO: Pod "pod-update-activedeadlineseconds-0843662b-e4b9-4bcf-ba1d-74a2b3c865ee": Phase="Running", Reason="", readiness=true. Elapsed: 4.594214ms Sep 29 10:50:58.516: INFO: Pod "pod-update-activedeadlineseconds-0843662b-e4b9-4bcf-ba1d-74a2b3c865ee": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009316051s Sep 29 10:50:58.516: INFO: Pod "pod-update-activedeadlineseconds-0843662b-e4b9-4bcf-ba1d-74a2b3c865ee" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:50:58.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1424" for this suite. 
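The test above exercises the Pod `activeDeadlineSeconds` field: after the update, the kubelet terminates the pod, and its status transitions to `Phase="Failed", Reason="DeadlineExceeded"`, exactly as logged. A minimal manifest illustrating the field (the name and command are illustrative, not taken from the test):

```yaml
# Illustrative pod with a short activeDeadlineSeconds.
# Once the deadline elapses, the kubelet kills the pod and sets
# status.phase=Failed with reason=DeadlineExceeded, matching the
# transition in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo   # hypothetical name, not from the log
spec:
  activeDeadlineSeconds: 5
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
```

Note that the e2e test creates the pod first and then shortens `activeDeadlineSeconds` on the running pod; it is one of the few pod spec fields that may be mutated after creation.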
• [SLOW TEST:6.622 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":899,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:50:58.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-580 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Sep 29 10:50:58.677: INFO: Found 0 stateful pods, waiting for 3 Sep 29 10:51:08.682: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:51:08.682: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:51:08.682: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Sep 29 10:51:18.686: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:51:18.686: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:51:18.686: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Sep 29 10:51:18.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-580 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:51:18.981: INFO: stderr: "I0929 10:51:18.823283 425 log.go:181] (0xc0006454a0) (0xc00063caa0) Create stream\nI0929 10:51:18.823354 425 log.go:181] (0xc0006454a0) (0xc00063caa0) Stream added, broadcasting: 1\nI0929 10:51:18.825956 425 log.go:181] (0xc0006454a0) Reply frame received for 1\nI0929 10:51:18.825993 425 log.go:181] (0xc0006454a0) (0xc00063cb40) Create stream\nI0929 10:51:18.826004 425 log.go:181] (0xc0006454a0) (0xc00063cb40) Stream added, broadcasting: 3\nI0929 10:51:18.826947 425 log.go:181] (0xc0006454a0) Reply frame received for 3\nI0929 10:51:18.827006 425 log.go:181] (0xc0006454a0) (0xc0007d4460) Create stream\nI0929 10:51:18.827046 425 log.go:181] (0xc0006454a0) (0xc0007d4460) Stream added, broadcasting: 5\nI0929 10:51:18.827996 425 log.go:181] (0xc0006454a0) Reply frame received for 5\nI0929 
10:51:18.935514 425 log.go:181] (0xc0006454a0) Data frame received for 5\nI0929 10:51:18.935536 425 log.go:181] (0xc0007d4460) (5) Data frame handling\nI0929 10:51:18.935548 425 log.go:181] (0xc0007d4460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:51:18.972292 425 log.go:181] (0xc0006454a0) Data frame received for 3\nI0929 10:51:18.972321 425 log.go:181] (0xc00063cb40) (3) Data frame handling\nI0929 10:51:18.972344 425 log.go:181] (0xc00063cb40) (3) Data frame sent\nI0929 10:51:18.972993 425 log.go:181] (0xc0006454a0) Data frame received for 5\nI0929 10:51:18.973015 425 log.go:181] (0xc0007d4460) (5) Data frame handling\nI0929 10:51:18.973039 425 log.go:181] (0xc0006454a0) Data frame received for 3\nI0929 10:51:18.973062 425 log.go:181] (0xc00063cb40) (3) Data frame handling\nI0929 10:51:18.974944 425 log.go:181] (0xc0006454a0) Data frame received for 1\nI0929 10:51:18.974987 425 log.go:181] (0xc00063caa0) (1) Data frame handling\nI0929 10:51:18.975011 425 log.go:181] (0xc00063caa0) (1) Data frame sent\nI0929 10:51:18.975024 425 log.go:181] (0xc0006454a0) (0xc00063caa0) Stream removed, broadcasting: 1\nI0929 10:51:18.975050 425 log.go:181] (0xc0006454a0) Go away received\nI0929 10:51:18.976484 425 log.go:181] (0xc0006454a0) (0xc00063caa0) Stream removed, broadcasting: 1\nI0929 10:51:18.976533 425 log.go:181] (0xc0006454a0) (0xc00063cb40) Stream removed, broadcasting: 3\nI0929 10:51:18.976554 425 log.go:181] (0xc0006454a0) (0xc0007d4460) Stream removed, broadcasting: 5\n" Sep 29 10:51:18.981: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:51:18.981: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Sep 29 10:51:29.016: INFO: Updating stateful set ss2 STEP: Creating 
a new revision STEP: Updating Pods in reverse ordinal order Sep 29 10:51:39.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-580 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:51:39.330: INFO: stderr: "I0929 10:51:39.234963 443 log.go:181] (0xc00093f080) (0xc0000e0960) Create stream\nI0929 10:51:39.235024 443 log.go:181] (0xc00093f080) (0xc0000e0960) Stream added, broadcasting: 1\nI0929 10:51:39.240420 443 log.go:181] (0xc00093f080) Reply frame received for 1\nI0929 10:51:39.240464 443 log.go:181] (0xc00093f080) (0xc0000e1220) Create stream\nI0929 10:51:39.240480 443 log.go:181] (0xc00093f080) (0xc0000e1220) Stream added, broadcasting: 3\nI0929 10:51:39.241844 443 log.go:181] (0xc00093f080) Reply frame received for 3\nI0929 10:51:39.241900 443 log.go:181] (0xc00093f080) (0xc000377900) Create stream\nI0929 10:51:39.241915 443 log.go:181] (0xc00093f080) (0xc000377900) Stream added, broadcasting: 5\nI0929 10:51:39.242844 443 log.go:181] (0xc00093f080) Reply frame received for 5\nI0929 10:51:39.323728 443 log.go:181] (0xc00093f080) Data frame received for 3\nI0929 10:51:39.323798 443 log.go:181] (0xc0000e1220) (3) Data frame handling\nI0929 10:51:39.323825 443 log.go:181] (0xc0000e1220) (3) Data frame sent\nI0929 10:51:39.323845 443 log.go:181] (0xc00093f080) Data frame received for 3\nI0929 10:51:39.323861 443 log.go:181] (0xc0000e1220) (3) Data frame handling\nI0929 10:51:39.323884 443 log.go:181] (0xc00093f080) Data frame received for 5\nI0929 10:51:39.323907 443 log.go:181] (0xc000377900) (5) Data frame handling\nI0929 10:51:39.323945 443 log.go:181] (0xc000377900) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0929 10:51:39.324093 443 log.go:181] (0xc00093f080) Data frame received for 5\nI0929 10:51:39.324121 443 log.go:181] (0xc000377900) (5) Data frame handling\nI0929 10:51:39.325460 443 log.go:181] 
(0xc00093f080) Data frame received for 1\nI0929 10:51:39.325477 443 log.go:181] (0xc0000e0960) (1) Data frame handling\nI0929 10:51:39.325487 443 log.go:181] (0xc0000e0960) (1) Data frame sent\nI0929 10:51:39.325501 443 log.go:181] (0xc00093f080) (0xc0000e0960) Stream removed, broadcasting: 1\nI0929 10:51:39.325512 443 log.go:181] (0xc00093f080) Go away received\nI0929 10:51:39.325970 443 log.go:181] (0xc00093f080) (0xc0000e0960) Stream removed, broadcasting: 1\nI0929 10:51:39.325992 443 log.go:181] (0xc00093f080) (0xc0000e1220) Stream removed, broadcasting: 3\nI0929 10:51:39.326004 443 log.go:181] (0xc00093f080) (0xc000377900) Stream removed, broadcasting: 5\n" Sep 29 10:51:39.330: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:51:39.330: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:51:49.353: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update Sep 29 10:51:49.353: INFO: Waiting for Pod statefulset-580/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 29 10:51:49.353: INFO: Waiting for Pod statefulset-580/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Sep 29 10:51:59.361: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update Sep 29 10:51:59.361: INFO: Waiting for Pod statefulset-580/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Sep 29 10:52:09.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-580 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Sep 29 10:52:09.612: INFO: stderr: "I0929 10:52:09.488883 461 log.go:181] (0xc000f3d080) (0xc00045bf40) Create stream\nI0929 10:52:09.488951 461 log.go:181] (0xc000f3d080) (0xc00045bf40) Stream added, broadcasting: 
1\nI0929 10:52:09.493178 461 log.go:181] (0xc000f3d080) Reply frame received for 1\nI0929 10:52:09.493224 461 log.go:181] (0xc000f3d080) (0xc00045ab40) Create stream\nI0929 10:52:09.493235 461 log.go:181] (0xc000f3d080) (0xc00045ab40) Stream added, broadcasting: 3\nI0929 10:52:09.494229 461 log.go:181] (0xc000f3d080) Reply frame received for 3\nI0929 10:52:09.494261 461 log.go:181] (0xc000f3d080) (0xc0009f8640) Create stream\nI0929 10:52:09.494270 461 log.go:181] (0xc000f3d080) (0xc0009f8640) Stream added, broadcasting: 5\nI0929 10:52:09.495045 461 log.go:181] (0xc000f3d080) Reply frame received for 5\nI0929 10:52:09.574645 461 log.go:181] (0xc000f3d080) Data frame received for 5\nI0929 10:52:09.574667 461 log.go:181] (0xc0009f8640) (5) Data frame handling\nI0929 10:52:09.574678 461 log.go:181] (0xc0009f8640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0929 10:52:09.603706 461 log.go:181] (0xc000f3d080) Data frame received for 3\nI0929 10:52:09.603755 461 log.go:181] (0xc00045ab40) (3) Data frame handling\nI0929 10:52:09.603789 461 log.go:181] (0xc00045ab40) (3) Data frame sent\nI0929 10:52:09.603887 461 log.go:181] (0xc000f3d080) Data frame received for 5\nI0929 10:52:09.603907 461 log.go:181] (0xc0009f8640) (5) Data frame handling\nI0929 10:52:09.603967 461 log.go:181] (0xc000f3d080) Data frame received for 3\nI0929 10:52:09.603995 461 log.go:181] (0xc00045ab40) (3) Data frame handling\nI0929 10:52:09.606026 461 log.go:181] (0xc000f3d080) Data frame received for 1\nI0929 10:52:09.606036 461 log.go:181] (0xc00045bf40) (1) Data frame handling\nI0929 10:52:09.606042 461 log.go:181] (0xc00045bf40) (1) Data frame sent\nI0929 10:52:09.606181 461 log.go:181] (0xc000f3d080) (0xc00045bf40) Stream removed, broadcasting: 1\nI0929 10:52:09.606257 461 log.go:181] (0xc000f3d080) Go away received\nI0929 10:52:09.606564 461 log.go:181] (0xc000f3d080) (0xc00045bf40) Stream removed, broadcasting: 1\nI0929 10:52:09.606583 461 log.go:181] (0xc000f3d080) 
(0xc00045ab40) Stream removed, broadcasting: 3\nI0929 10:52:09.606593 461 log.go:181] (0xc000f3d080) (0xc0009f8640) Stream removed, broadcasting: 5\n" Sep 29 10:52:09.612: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Sep 29 10:52:09.612: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Sep 29 10:52:19.646: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Sep 29 10:52:29.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-580 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Sep 29 10:52:29.895: INFO: stderr: "I0929 10:52:29.825171 479 log.go:181] (0xc000dcc000) (0xc000c18000) Create stream\nI0929 10:52:29.825264 479 log.go:181] (0xc000dcc000) (0xc000c18000) Stream added, broadcasting: 1\nI0929 10:52:29.827680 479 log.go:181] (0xc000dcc000) Reply frame received for 1\nI0929 10:52:29.827731 479 log.go:181] (0xc000dcc000) (0xc000898000) Create stream\nI0929 10:52:29.827750 479 log.go:181] (0xc000dcc000) (0xc000898000) Stream added, broadcasting: 3\nI0929 10:52:29.829003 479 log.go:181] (0xc000dcc000) Reply frame received for 3\nI0929 10:52:29.829054 479 log.go:181] (0xc000dcc000) (0xc001002000) Create stream\nI0929 10:52:29.829072 479 log.go:181] (0xc000dcc000) (0xc001002000) Stream added, broadcasting: 5\nI0929 10:52:29.830369 479 log.go:181] (0xc000dcc000) Reply frame received for 5\nI0929 10:52:29.886987 479 log.go:181] (0xc000dcc000) Data frame received for 3\nI0929 10:52:29.887016 479 log.go:181] (0xc000898000) (3) Data frame handling\nI0929 10:52:29.887024 479 log.go:181] (0xc000898000) (3) Data frame sent\nI0929 10:52:29.887030 479 log.go:181] (0xc000dcc000) Data frame received for 3\nI0929 10:52:29.887035 479 log.go:181] (0xc000898000) (3) Data frame handling\nI0929 10:52:29.887070 479 
log.go:181] (0xc000dcc000) Data frame received for 5\nI0929 10:52:29.887078 479 log.go:181] (0xc001002000) (5) Data frame handling\nI0929 10:52:29.887084 479 log.go:181] (0xc001002000) (5) Data frame sent\nI0929 10:52:29.887089 479 log.go:181] (0xc000dcc000) Data frame received for 5\nI0929 10:52:29.887094 479 log.go:181] (0xc001002000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0929 10:52:29.889344 479 log.go:181] (0xc000dcc000) Data frame received for 1\nI0929 10:52:29.889381 479 log.go:181] (0xc000c18000) (1) Data frame handling\nI0929 10:52:29.889400 479 log.go:181] (0xc000c18000) (1) Data frame sent\nI0929 10:52:29.889422 479 log.go:181] (0xc000dcc000) (0xc000c18000) Stream removed, broadcasting: 1\nI0929 10:52:29.889626 479 log.go:181] (0xc000dcc000) Go away received\nI0929 10:52:29.889869 479 log.go:181] (0xc000dcc000) (0xc000c18000) Stream removed, broadcasting: 1\nI0929 10:52:29.889890 479 log.go:181] (0xc000dcc000) (0xc000898000) Stream removed, broadcasting: 3\nI0929 10:52:29.889901 479 log.go:181] (0xc000dcc000) (0xc001002000) Stream removed, broadcasting: 5\n" Sep 29 10:52:29.895: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Sep 29 10:52:29.895: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Sep 29 10:52:39.913: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update Sep 29 10:52:39.913: INFO: Waiting for Pod statefulset-580/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 29 10:52:39.913: INFO: Waiting for Pod statefulset-580/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 29 10:52:49.919: INFO: Waiting for StatefulSet statefulset-580/ss2 to complete update Sep 29 10:52:49.920: INFO: Waiting for Pod statefulset-580/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Sep 29 10:52:59.921: INFO: Waiting for 
StatefulSet statefulset-580/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 29 10:53:09.922: INFO: Deleting all statefulset in ns statefulset-580 Sep 29 10:53:09.925: INFO: Scaling statefulset ss2 to 0 Sep 29 10:53:29.944: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 10:53:29.947: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:53:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-580" for this suite. • [SLOW TEST:151.453 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":60,"skipped":907,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:53:29.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 10:53:30.589: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 10:53:32.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 10:53:34.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973610, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 10:53:37.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:53:38.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-3133" for this suite. STEP: Destroying namespace "webhook-3133-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.393 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":61,"skipped":929,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:53:38.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 29 10:53:38.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8385' Sep 29 10:53:38.629: INFO: stderr: "" Sep 29 10:53:38.629: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Sep 29 10:53:43.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8385 -o json' Sep 29 10:53:43.779: INFO: stderr: "" Sep 29 10:53:43.779: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-29T10:53:38Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": 
\"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-29T10:53:38Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.32\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-29T10:53:41Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8385\",\n \"resourceVersion\": \"1598894\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8385/pods/e2e-test-httpd-pod\",\n \"uid\": \"d175f7c8-9f07-49a7-9f44-996d8a02a1ef\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-252nd\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": 
{},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-252nd\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-252nd\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-29T10:53:38Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-29T10:53:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-29T10:53:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-29T10:53:38Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://635f28974bc50f4e0bed29e2e2faf8dad3b44094a285df6ba70175c961422c60\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-09-29T10:53:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.32\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.32\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-09-29T10:53:38Z\"\n }\n}\n" STEP: replace the image in the pod Sep 29 10:53:43.779: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8385' Sep 29 10:53:44.087: INFO: stderr: "" Sep 29 10:53:44.087: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Sep 29 10:53:44.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8385' Sep 29 10:53:58.107: INFO: stderr: "" Sep 29 10:53:58.107: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:53:58.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8385" for this suite. 
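The `kubectl replace -f - --namespace=kubectl-8385` invocation above reads a complete object manifest from stdin and swaps the container image from `httpd:2.4.38-alpine` to `busybox:1.29`. A reconstructed sketch of the kind of manifest piped to the command (not the test's exact object; the command line is an assumption, since busybox exits immediately without one):

```yaml
# Sketch of the replacement manifest. kubectl replace requires the
# full pod spec, and the pod must already exist; for a running pod
# only a handful of fields (the container image among them) may
# differ from the live object, or the API server rejects the update.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-8385
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # image being swapped in
    command: ["sleep", "3600"]              # assumed, not from the log
```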
• [SLOW TEST:19.740 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":62,"skipped":937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:53:58.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 29 10:53:58.233: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Sep 29 10:53:58.238: INFO: 
starting watch STEP: patching STEP: updating Sep 29 10:53:58.249: INFO: waiting for watch events with expected annotations Sep 29 10:53:58.249: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:53:58.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9544" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":63,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:53:58.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
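The readiness-probe test that follows creates a pod whose probe always fails, then asserts over the observation window that the pod never reports Ready and its restart count stays at zero. A minimal sketch of such a pod spec, assuming an always-failing exec probe (image and probe command are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-always-fails   # illustrative name
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always exits non-zero, so the pod stays NotReady
      initialDelaySeconds: 5
      periodSeconds: 5
```

A failing readiness probe gates traffic but never restarts the container; only a liveness probe triggers restarts, which is why the test can require both "never ready" and "never restarted".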
[AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:54:58.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3071" for this suite. • [SLOW TEST:60.143 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1008,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:54:58.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:14.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8063" for this suite. • [SLOW TEST:16.151 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":65,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:14.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7096e21e-d496-487b-878b-7f1b7d135de3 STEP: Creating a pod to test consume configMaps Sep 29 10:55:14.795: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417" in namespace "configmap-2150" to be "Succeeded or Failed" Sep 29 10:55:14.815: INFO: Pod "pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417": Phase="Pending", Reason="", readiness=false. Elapsed: 20.008362ms Sep 29 10:55:16.818: INFO: Pod "pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023537027s Sep 29 10:55:18.830: INFO: Pod "pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035342439s STEP: Saw pod success Sep 29 10:55:18.830: INFO: Pod "pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417" satisfied condition "Succeeded or Failed" Sep 29 10:55:18.833: INFO: Trying to get logs from node kali-worker pod pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417 container configmap-volume-test: STEP: delete the pod Sep 29 10:55:18.884: INFO: Waiting for pod pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417 to disappear Sep 29 10:55:18.893: INFO: Pod pod-configmaps-5e9e3971-3e18-473d-8474-25ed7a62e417 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:18.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2150" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":1061,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:18.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Sep 29 10:55:23.559: INFO: Successfully updated pod "pod-update-1b3f99bd-eeb2-4bb6-9754-a760e00f5e35" STEP: verifying the updated pod is in kubernetes Sep 29 10:55:23.579: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:23.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2034" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":67,"skipped":1074,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:23.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding 
to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 10:55:24.094: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 10:55:26.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973724, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973724, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973724, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736973724, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 10:55:29.178: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration 
object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-401" for this suite. STEP: Destroying namespace "webhook-401-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.002 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":68,"skipped":1083,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:29.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 10:55:29.660: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 29 10:55:32.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5048 create -f -' Sep 29 10:55:36.361: INFO: stderr: "" Sep 29 10:55:36.361: INFO: stdout: "e2e-test-crd-publish-openapi-4471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 29 10:55:36.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5048 delete e2e-test-crd-publish-openapi-4471-crds test-cr' Sep 29 10:55:36.481: INFO: stderr: "" Sep 29 10:55:36.481: INFO: stdout: "e2e-test-crd-publish-openapi-4471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Sep 29 10:55:36.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5048 apply -f -' Sep 29 10:55:36.772: INFO: stderr: "" Sep 29 10:55:36.772: INFO: stdout: "e2e-test-crd-publish-openapi-4471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Sep 29 
10:55:36.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5048 delete e2e-test-crd-publish-openapi-4471-crds test-cr' Sep 29 10:55:36.887: INFO: stderr: "" Sep 29 10:55:36.887: INFO: stdout: "e2e-test-crd-publish-openapi-4471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 29 10:55:36.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4471-crds' Sep 29 10:55:37.172: INFO: stderr: "" Sep 29 10:55:37.172: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4471-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:40.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5048" for this suite. 
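The CRD exercised above preserves unknown fields at the schema root, which is why client-side validation accepts arbitrary properties on create/apply and why `kubectl explain` prints an empty DESCRIPTION. A minimal sketch of a CRD in that shape (group, kind, and names are illustrative, not the generated e2e names):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # root-level: any properties are allowed
```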
• [SLOW TEST:10.557 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":69,"skipped":1096,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:40.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the 
container should be terminated STEP: the termination message should be set Sep 29 10:55:44.291: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:44.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9520" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1116,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:44.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 29 10:55:52.707: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 29 10:55:52.733: INFO: Pod pod-with-poststart-exec-hook still exists Sep 29 10:55:54.733: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 29 10:55:54.738: INFO: Pod pod-with-poststart-exec-hook still exists Sep 29 10:55:56.733: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 29 10:55:56.738: INFO: Pod pod-with-poststart-exec-hook still exists Sep 29 10:55:58.733: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Sep 29 10:55:58.737: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 10:55:58.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9400" for this suite. 
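The lifecycle-hook test above creates a pod with a postStart exec hook, checks the hook ran, then deletes the pod and polls until it disappears (the pod lingers through its termination grace period, hence the "still exists" loop in the log). A minimal sketch of a pod with such a hook, assuming an illustrative image and hook command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo poststart > /tmp/hook.log"]
```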
• [SLOW TEST:14.236 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":71,"skipped":1118,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 10:55:58.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-068dc381-a27c-47b4-8d12-2db781282c92 in namespace container-probe-9274 Sep 29 10:56:02.894: INFO: Started pod liveness-068dc381-a27c-47b4-8d12-2db781282c92 in namespace container-probe-9274 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 10:56:02.897: INFO: Initial restart count of pod liveness-068dc381-a27c-47b4-8d12-2db781282c92 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:03.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9274" for this suite. • [SLOW TEST:245.018 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1123,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:03.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:04.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2558" for this suite. 
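The Kubelet test above schedules a pod whose container command always fails and verifies the pod can still be deleted even while crash-looping. A minimal sketch of such a pod, with illustrative names (the default restartPolicy of Always makes the container restart repeatedly, which is the state the delete must succeed from):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-always-fails   # illustrative name
spec:
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero immediately, so the container crash-loops
```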
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":1124,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:04.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a9c1b720-58f4-4bfe-809d-97f203c757a8 STEP: Creating a pod to test consume configMaps Sep 29 11:00:04.848: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4" in namespace "projected-1141" to be "Succeeded or Failed" Sep 29 11:00:04.851: INFO: Pod "pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.026032ms Sep 29 11:00:06.978: INFO: Pod "pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130092225s Sep 29 11:00:08.981: INFO: Pod "pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.133869115s STEP: Saw pod success Sep 29 11:00:08.982: INFO: Pod "pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4" satisfied condition "Succeeded or Failed" Sep 29 11:00:08.984: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4 container projected-configmap-volume-test: STEP: delete the pod Sep 29 11:00:09.051: INFO: Waiting for pod pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4 to disappear Sep 29 11:00:09.073: INFO: Pod pod-projected-configmaps-08765d07-79bb-4ff3-bf32-b0b3e2c05ce4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:09.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1141" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:09.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:00:09.242: INFO: Pod name rollover-pod: Found 0 pods out of 1 Sep 29 11:00:14.271: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 29 11:00:14.271: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Sep 29 11:00:16.275: INFO: Creating deployment "test-rollover-deployment" Sep 29 11:00:16.288: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Sep 29 11:00:18.294: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Sep 29 11:00:18.300: INFO: Ensure that both replica sets have 1 created replica Sep 29 11:00:18.305: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Sep 29 11:00:18.312: INFO: Updating deployment test-rollover-deployment Sep 29 11:00:18.312: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Sep 29 11:00:20.370: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Sep 29 11:00:20.376: INFO: Make sure deployment "test-rollover-deployment" is complete Sep 29 11:00:20.418: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:20.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974018, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:22.428: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:22.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974021, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:24.427: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:24.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974021, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:26.427: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:26.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974021, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:28.427: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:28.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974021, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:30.427: INFO: all replica sets need to contain the pod-template-hash label Sep 29 11:00:30.427: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974021, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974016, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:00:32.437: INFO: Sep 29 11:00:32.437: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 29 11:00:32.445: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9094 /apis/apps/v1/namespaces/deployment-9094/deployments/test-rollover-deployment 27761ef9-fa39-4eb3-92ad-81a7ac245232 1600537 2 2020-09-29 11:00:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-29 11:00:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 11:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b05bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-29 11:00:16 +0000 UTC,LastTransitionTime:2020-09-29 11:00:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-09-29 11:00:31 +0000 UTC,LastTransitionTime:2020-09-29 11:00:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 29 11:00:32.448: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-9094 
/apis/apps/v1/namespaces/deployment-9094/replicasets/test-rollover-deployment-5797c7764 071dda26-7280-4e6d-a8c4-f17ef483b544 1600526 2 2020-09-29 11:00:18 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 27761ef9-fa39-4eb3-92ad-81a7ac245232 0xc0037141e0 0xc0037141e1}] [] [{kube-controller-manager Update apps/v1 2020-09-29 11:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27761ef9-fa39-4eb3-92ad-81a7ac245232\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] 
[] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037142f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 29 11:00:32.448: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Sep 29 11:00:32.449: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9094 /apis/apps/v1/namespaces/deployment-9094/replicasets/test-rollover-controller a876e077-b868-4c1b-8c66-da4ba35ba53e 1600536 2 2020-09-29 11:00:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 27761ef9-fa39-4eb3-92ad-81a7ac245232 0xc003714017 0xc003714018}] [] [{e2e.test Update apps/v1 2020-09-29 11:00:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update 
apps/v1 2020-09-29 11:00:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27761ef9-fa39-4eb3-92ad-81a7ac245232\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003714108 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 11:00:32.449: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9094 /apis/apps/v1/namespaces/deployment-9094/replicasets/test-rollover-deployment-78bc8b888c c0d58ac9-6c20-46ac-bf9e-a91ccde038eb 1600479 2 2020-09-29 11:00:16 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 27761ef9-fa39-4eb3-92ad-81a7ac245232 0xc003714417 0xc003714418}] [] [{kube-controller-manager Update apps/v1 2020-09-29 11:00:18 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"27761ef9-fa39-4eb3-92ad-81a7ac245232\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0037144b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 11:00:32.452: INFO: Pod "test-rollover-deployment-5797c7764-zzrjw" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-zzrjw test-rollover-deployment-5797c7764- deployment-9094 /api/v1/namespaces/deployment-9094/pods/test-rollover-deployment-5797c7764-zzrjw 36bd5f47-6ef2-4196-9f00-a1efa8bb1500 1600494 0 2020-09-29 11:00:18 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 071dda26-7280-4e6d-a8c4-f17ef483b544 0xc000b1c620 0xc000b1c621}] [] [{kube-controller-manager Update v1 2020-09-29 11:00:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"071dda26-7280-4e6d-a8c4-f17ef483b544\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 11:00:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lk2pc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lk2pc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lk2pc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:00:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:00:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:00:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:00:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.39,StartTime:2020-09-29 11:00:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 11:00:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://50176c4ede9ad49516fcfeaa79a65ea1a463ffd7aee9fe4ee200c448ec57d7c8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:32.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9094" for this suite. 
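The repeated "ReplicaSetUpdated" polling above is expected: with `minReadySeconds: 10`, the new pod must remain Ready for 10 seconds before it counts as available, which is why the status holds at `AvailableReplicas:1` for several iterations before the rollover completes. A manifest matching the logged spec (reconstructed from the deployment dump above, not taken from the test source) would look roughly like:

```yaml
# Sketch of the rolled-over Deployment as reported in the dump above;
# reconstructed from the logged spec, not copied from the test source.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-9094
spec:
  replicas: 1
  minReadySeconds: 10        # causes the ~10s wait between ReadyReplicas:2 and completion
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # the old pod is retained until the new one is available
      maxSurge: 1
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
```

With `maxUnavailable: 0` and `maxSurge: 1`, the rollover proceeds by surging one new replica, waiting out `minReadySeconds`, and only then scaling the old ReplicaSets to zero, matching the final "Ensure that both old replica sets have no replicas" check.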
• [SLOW TEST:23.361 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":75,"skipped":1175,"failed":0} [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:32.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:32.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5048" for this 
suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":76,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:32.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-6ee725e6-2e8b-4e9d-b538-265498049000 STEP: Creating a pod to test consume secrets Sep 29 11:00:32.715: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471" in namespace "projected-4027" to be "Succeeded or Failed" Sep 29 11:00:32.737: INFO: Pod "pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471": Phase="Pending", Reason="", readiness=false. Elapsed: 22.368578ms Sep 29 11:00:34.741: INFO: Pod "pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026317955s Sep 29 11:00:36.746: INFO: Pod "pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03088167s STEP: Saw pod success Sep 29 11:00:36.746: INFO: Pod "pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471" satisfied condition "Succeeded or Failed" Sep 29 11:00:36.749: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471 container projected-secret-volume-test: STEP: delete the pod Sep 29 11:00:36.781: INFO: Waiting for pod pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471 to disappear Sep 29 11:00:36.846: INFO: Pod pod-projected-secrets-b0dcb3e1-b27e-4103-b024-895e7fc6b471 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:00:36.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4027" for this suite. 
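The `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` lines above (Pending → Pending → Succeeded, with elapsed times) are a plain poll-until-terminal-phase loop. A minimal stand-alone sketch of that pattern, with `get_phase` standing in for the real API call (this is an illustration, not the framework's actual code):

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase
    ('Succeeded' or 'Failed'), mirroring the e2e framework's
    'Waiting up to 5m0s for pod ...' behaviour seen in the log."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()  # stand-in for a GET on the pod object
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod still {phase!r} after {timeout}s")
```

For example, `wait_for_pod(lambda: "Succeeded")` returns on the first poll, just as the log's pod satisfied its condition after a few Pending polls.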
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1220,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:00:36.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:00:43.024: INFO: File jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d8f3c2f8-3cf7-435f-83f1-46e43a037da8 contains '' instead of 'foo.example.com.' 
Sep 29 11:00:43.024: INFO: Lookups using dns-8349/dns-test-d8f3c2f8-3cf7-435f-83f1-46e43a037da8 failed for: [jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local] Sep 29 11:00:48.033: INFO: DNS probes using dns-test-d8f3c2f8-3cf7-435f-83f1-46e43a037da8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:00:56.460: INFO: File wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:00:56.464: INFO: File jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:00:56.464: INFO: Lookups using dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 failed for: [wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local] Sep 29 11:01:01.469: INFO: File wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:01:01.473: INFO: File jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. 
' instead of 'bar.example.com.' Sep 29 11:01:01.473: INFO: Lookups using dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 failed for: [wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local] Sep 29 11:01:06.469: INFO: File wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:01:06.473: INFO: File jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:01:06.473: INFO: Lookups using dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 failed for: [wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local] Sep 29 11:01:11.474: INFO: File wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' Sep 29 11:01:11.518: INFO: File jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local from pod dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Sep 29 11:01:11.518: INFO: Lookups using dns-8349/dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 failed for: [wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local] Sep 29 11:01:16.473: INFO: DNS probes using dns-test-d97f05f4-c562-4e40-b35a-a8d9ecee3ed2 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8349.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8349.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:01:23.193: INFO: DNS probes using dns-test-8be7956a-2d1a-4f25-9d8b-a50da3598c3d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:01:23.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8349" for this suite. 
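The `contains 'foo.example.com. ' instead of 'bar.example.com.'` failures above come from comparing each probe's result file (raw `dig +short` output, which carries a trailing newline) against the expected CNAME target, retrying until the changed ExternalName record propagates. The comparison reduces to something like this sketch (not the test's actual code):

```python
def probe_matches(file_content: str, expected: str) -> bool:
    """True when a probe result file holds the expected CNAME target.
    `dig +short` output ends with a newline, hence the strip(); the log's
    "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are
    this check failing while the old record is still being served."""
    return file_content.strip() == expected
```

The stale-record case in the log corresponds to `probe_matches("foo.example.com.\n", "bar.example.com.")` being false until the DNS update lands.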
• [SLOW TEST:46.437 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":78,"skipped":1236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:01:23.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:01:23.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Sep 29 11:01:24.243: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:24Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 
fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:24Z]] name:name1 resourceVersion:1600846 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad3d8c32-7d4d-4812-970e-aadf41caf133] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Sep 29 11:01:34.250: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:34Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:34Z]] name:name2 resourceVersion:1600922 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b13c87c6-c5fa-46a0-8730-35e2b17a57ef] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Sep 29 11:01:44.279: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:44Z]] name:name1 resourceVersion:1600952 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad3d8c32-7d4d-4812-970e-aadf41caf133] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Sep 29 11:01:54.287: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 
fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:54Z]] name:name2 resourceVersion:1600980 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b13c87c6-c5fa-46a0-8730-35e2b17a57ef] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Sep 29 11:02:04.297: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:24Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:44Z]] name:name1 resourceVersion:1601010 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:ad3d8c32-7d4d-4812-970e-aadf41caf133] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Sep 29 11:02:14.306: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-09-29T11:01:34Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-09-29T11:01:54Z]] name:name2 resourceVersion:1601040 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:b13c87c6-c5fa-46a0-8730-35e2b17a57ef] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:02:24.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "crd-watch-8993" for this suite. • [SLOW TEST:61.519 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":79,"skipped":1266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:02:24.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 
configmaps STEP: Creating RC which spawns configmap-volume pods Sep 29 11:02:25.564: INFO: Pod name wrapped-volume-race-4ad2622d-259e-4662-94b6-b80ba6cc61e4: Found 0 pods out of 5 Sep 29 11:02:30.573: INFO: Pod name wrapped-volume-race-4ad2622d-259e-4662-94b6-b80ba6cc61e4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4ad2622d-259e-4662-94b6-b80ba6cc61e4 in namespace emptydir-wrapper-4633, will wait for the garbage collector to delete the pods Sep 29 11:02:46.700: INFO: Deleting ReplicationController wrapped-volume-race-4ad2622d-259e-4662-94b6-b80ba6cc61e4 took: 8.490799ms Sep 29 11:02:47.101: INFO: Terminating ReplicationController wrapped-volume-race-4ad2622d-259e-4662-94b6-b80ba6cc61e4 pods took: 400.219917ms STEP: Creating RC which spawns configmap-volume pods Sep 29 11:02:58.961: INFO: Pod name wrapped-volume-race-d37fe7f7-9eb5-4f4d-8baa-72b76c62d7eb: Found 0 pods out of 5 Sep 29 11:03:03.971: INFO: Pod name wrapped-volume-race-d37fe7f7-9eb5-4f4d-8baa-72b76c62d7eb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d37fe7f7-9eb5-4f4d-8baa-72b76c62d7eb in namespace emptydir-wrapper-4633, will wait for the garbage collector to delete the pods Sep 29 11:03:18.147: INFO: Deleting ReplicationController wrapped-volume-race-d37fe7f7-9eb5-4f4d-8baa-72b76c62d7eb took: 7.464865ms Sep 29 11:03:18.547: INFO: Terminating ReplicationController wrapped-volume-race-d37fe7f7-9eb5-4f4d-8baa-72b76c62d7eb pods took: 400.210416ms STEP: Creating RC which spawns configmap-volume pods Sep 29 11:03:28.532: INFO: Pod name wrapped-volume-race-c94da774-425d-47e6-8ad6-b9087f7d2e18: Found 1 pods out of 5 Sep 29 11:03:33.542: INFO: Pod name wrapped-volume-race-c94da774-425d-47e6-8ad6-b9087f7d2e18: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c94da774-425d-47e6-8ad6-b9087f7d2e18 in namespace 
emptydir-wrapper-4633, will wait for the garbage collector to delete the pods Sep 29 11:03:47.658: INFO: Deleting ReplicationController wrapped-volume-race-c94da774-425d-47e6-8ad6-b9087f7d2e18 took: 7.741297ms Sep 29 11:03:48.058: INFO: Terminating ReplicationController wrapped-volume-race-c94da774-425d-47e6-8ad6-b9087f7d2e18 pods took: 400.187082ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:03:58.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4633" for this suite. • [SLOW TEST:94.035 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":80,"skipped":1292,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:03:58.863: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-fa09f446-5100-4bc2-a9a4-20b195f36544 STEP: Creating configMap with name cm-test-opt-upd-99161c4c-6ce1-43b3-9d68-2d6b2ccf3097 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-fa09f446-5100-4bc2-a9a4-20b195f36544 STEP: Updating configmap cm-test-opt-upd-99161c4c-6ce1-43b3-9d68-2d6b2ccf3097 STEP: Creating configMap with name cm-test-opt-create-7fb50e19-1f62-468f-9807-481a627e5678 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:05:31.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1660" for this suite. 
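The optional-ConfigMap behaviour exercised above (a deleted `cm-test-opt-del-…` source does not break the volume, and a later-created `cm-test-opt-create-…` eventually appears in it) hinges on `optional: true` on the projected source. An illustrative manifest sketch, with invented names, showing the field involved:

```yaml
# Sketch only; pod and ConfigMap names here are made up for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: main
    image: alpine:3
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create   # may not exist yet; pod still starts
          optional: true
```

Because the source is optional, the kubelet mounts the volume even while the ConfigMap is absent, then projects its keys once it is created, which is the update the test waits to observe.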
• [SLOW TEST:92.719 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1311,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:05:31.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:05:31.684: INFO: Waiting up to 5m0s for pod 
"alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9" in namespace "security-context-test-528" to be "Succeeded or Failed" Sep 29 11:05:31.694: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.35569ms Sep 29 11:05:33.697: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013682213s Sep 29 11:05:35.702: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018707312s Sep 29 11:05:37.731: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047661883s Sep 29 11:05:39.735: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051141066s Sep 29 11:05:39.735: INFO: Pod "alpine-nnp-false-f985836a-f13c-48f0-bcd7-9b36d8538eb9" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:05:39.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-528" for this suite. 
• [SLOW TEST:8.178 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1313,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:05:39.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:05:39.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8838' Sep 29 11:05:43.046: INFO: stderr: "" Sep 29 11:05:43.046: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Sep 29 11:05:43.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8838' Sep 29 11:05:43.358: INFO: stderr: "" Sep 29 11:05:43.358: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 29 11:05:44.364: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:05:44.364: INFO: Found 0 / 1 Sep 29 11:05:45.362: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:05:45.362: INFO: Found 0 / 1 Sep 29 11:05:46.364: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:05:46.364: INFO: Found 1 / 1 Sep 29 11:05:46.364: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 29 11:05:46.367: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:05:46.367: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Sep 29 11:05:46.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe pod agnhost-primary-znlwl --namespace=kubectl-8838' Sep 29 11:05:46.486: INFO: stderr: "" Sep 29 11:05:46.486: INFO: stdout: "Name: agnhost-primary-znlwl\nNamespace: kubectl-8838\nPriority: 0\nNode: kali-worker2/172.18.0.13\nStart Time: Tue, 29 Sep 2020 11:05:43 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.54\nIPs:\n IP: 10.244.1.54\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://f202cf50008b04cedfdf67dc7f1e1380e31ddfaa79438bfa38c37dc6b2340fa0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 29 Sep 2020 11:05:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9lxnd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9lxnd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9lxnd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-8838/agnhost-primary-znlwl to kali-worker2\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Sep 29 11:05:46.486: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-8838' Sep 29 11:05:46.617: INFO: stderr: "" Sep 29 11:05:46.617: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8838\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-znlwl\n" Sep 29 11:05:46.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-8838' Sep 29 11:05:46.720: INFO: stderr: "" Sep 29 11:05:46.720: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8838\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.105.48.69\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.54:6379\nSession Affinity: None\nEvents: \n" Sep 29 11:05:46.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe node kali-control-plane' Sep 29 11:05:46.858: INFO: stderr: "" Sep 29 11:05:46.858: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Wed, 23 Sep 2020 08:28:40 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Tue, 29 Sep 2020 11:05:39 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 29 Sep 2020 11:01:27 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 29 Sep 2020 11:01:27 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 29 Sep 2020 11:01:27 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 29 Sep 2020 11:01:27 +0000 Wed, 23 Sep 2020 08:29:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: f18d6a3b53c14eaca999fce1081671aa\n System UUID: e919c2db-6960-4f78-a4d1-1e39795c20e3\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-6cvzb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d2h\n 
kube-system coredns-f9fd979d6-zzb7k 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d2h\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d2h\n kube-system kindnet-mx6h2 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 6d2h\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 6d2h\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 6d2h\n kube-system kube-proxy-x4lnq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d2h\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 6d2h\n local-path-storage local-path-provisioner-78776bfc44-sm58q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d2h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Sep 29 11:05:46.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe namespace kubectl-8838' Sep 29 11:05:46.970: INFO: stderr: "" Sep 29 11:05:46.970: INFO: stdout: "Name: kubectl-8838\nLabels: e2e-framework=kubectl\n e2e-run=4c389a24-f053-434f-9b2e-b565abdb321c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:05:46.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8838" for this suite. 
• [SLOW TEST:7.218 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":83,"skipped":1317,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:05:46.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Sep 29 11:05:47.092: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Sep 29 11:05:47.101: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 29 11:05:47.101: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Sep 29 11:05:47.107: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Sep 29 11:05:47.107: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Sep 29 11:05:47.174: INFO: Verifying 
requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Sep 29 11:05:47.175: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Sep 29 11:05:54.731: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:05:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1781" for this suite. • [SLOW TEST:7.789 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":84,"skipped":1350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:05:54.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:05:54.912: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60" in namespace "projected-4580" to be "Succeeded or Failed" Sep 29 11:05:54.920: INFO: Pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.598924ms Sep 29 11:05:56.948: INFO: Pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036109855s Sep 29 11:05:58.952: INFO: Pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040781928s Sep 29 11:06:00.955: INFO: Pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043815929s STEP: Saw pod success Sep 29 11:06:00.956: INFO: Pod "downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60" satisfied condition "Succeeded or Failed" Sep 29 11:06:00.958: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60 container client-container: STEP: delete the pod Sep 29 11:06:01.258: INFO: Waiting for pod downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60 to disappear Sep 29 11:06:01.499: INFO: Pod downwardapi-volume-7b5d2da4-a6b5-4f9f-8252-2280ac023a60 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:06:01.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4580" for this suite. 
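The downward API volume exercised by the test above can be reproduced with a manifest along these lines. This is a minimal sketch, not the test's actual pod spec: the pod name and cpu limit value are hypothetical, and the image is borrowed from elsewhere in this run. The `resourceFieldRef` projects the container's own `limits.cpu` into a file, which is what the `client-container` logs read back.

```yaml
# Minimal sketch (hypothetical names): project the container's cpu limit
# into a file via a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # image seen earlier in this log
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m        # a limit must be set for limits.cpu to resolve
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```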
• [SLOW TEST:6.739 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1383,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:06:01.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Sep 29 11:06:01.698: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Sep 29 11:06:02.139: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Sep 29 11:06:04.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974362, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974362, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974362, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974362, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:06:07.240: INFO: Waited 720.980347ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:06:07.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6636" for this suite. 
• [SLOW TEST:6.394 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":86,"skipped":1385,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:06:07.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Sep 29 11:06:08.322: INFO: Waiting up to 5m0s for pod "client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44" in namespace "containers-4120" to be "Succeeded or Failed" Sep 29 11:06:08.500: INFO: Pod "client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44": 
Phase="Pending", Reason="", readiness=false. Elapsed: 177.373876ms Sep 29 11:06:10.505: INFO: Pod "client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182390837s Sep 29 11:06:12.508: INFO: Pod "client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.185522829s STEP: Saw pod success Sep 29 11:06:12.508: INFO: Pod "client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44" satisfied condition "Succeeded or Failed" Sep 29 11:06:12.510: INFO: Trying to get logs from node kali-worker pod client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44 container test-container: STEP: delete the pod Sep 29 11:06:12.558: INFO: Waiting for pod client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44 to disappear Sep 29 11:06:12.565: INFO: Pod client-containers-36c238cd-57f8-41c4-88df-4bb82c491d44 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:06:12.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4120" for this suite. 
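The entrypoint-override behavior verified above comes down to `command:` in the container spec replacing the image's ENTRYPOINT (and `args:` replacing CMD). A minimal sketch with hypothetical pod/container names and command, reusing an image that appears elsewhere in this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/httpd:2.4.38-alpine   # image used later in this run
    # `command` overrides the image's ENTRYPOINT; `args` would override CMD.
    command: ["httpd-foreground"]
```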
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1389,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:06:12.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:06:12.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3" in namespace "downward-api-4736" to be "Succeeded or Failed" Sep 29 11:06:12.706: INFO: Pod "downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.67939ms Sep 29 11:06:14.712: INFO: Pod "downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024529208s Sep 29 11:06:16.716: INFO: Pod "downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029029486s STEP: Saw pod success Sep 29 11:06:16.717: INFO: Pod "downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3" satisfied condition "Succeeded or Failed" Sep 29 11:06:16.720: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3 container client-container: STEP: delete the pod Sep 29 11:06:16.782: INFO: Waiting for pod downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3 to disappear Sep 29 11:06:16.786: INFO: Pod downwardapi-volume-883d89a3-76bc-4f04-8824-4832a4d58cd3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:06:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4736" for this suite. 
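`defaultMode` on a downward API volume controls the permission bits of the projected files; the JSON form is decimal, so 420 corresponds to 0644, the Kubernetes default that this test asserts. A minimal sketch with hypothetical names (the 0400 value here is illustrative, not the mode the test uses):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/httpd:2.4.38-alpine   # image used later in this run
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # illustrative; omit to get the default 0644 (420 decimal)
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```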
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1410,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:06:16.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-71df481f-a27c-4307-bcd3-3159d0495850 in namespace container-probe-491 Sep 29 11:06:20.861: INFO: Started pod liveness-71df481f-a27c-4307-bcd3-3159d0495850 in namespace container-probe-491 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 11:06:20.863: INFO: Initial restart count of pod liveness-71df481f-a27c-4307-bcd3-3159d0495850 is 0 Sep 29 11:06:37.073: INFO: Restart count of pod container-probe-491/liveness-71df481f-a27c-4307-bcd3-3159d0495850 is now 1 (16.210452211s elapsed) Sep 29 11:06:57.119: INFO: Restart count of pod container-probe-491/liveness-71df481f-a27c-4307-bcd3-3159d0495850 
is now 2 (36.256024758s elapsed) Sep 29 11:07:17.225: INFO: Restart count of pod container-probe-491/liveness-71df481f-a27c-4307-bcd3-3159d0495850 is now 3 (56.361951796s elapsed) Sep 29 11:07:37.270: INFO: Restart count of pod container-probe-491/liveness-71df481f-a27c-4307-bcd3-3159d0495850 is now 4 (1m16.406939685s elapsed) Sep 29 11:08:37.415: INFO: Restart count of pod container-probe-491/liveness-71df481f-a27c-4307-bcd3-3159d0495850 is now 5 (2m16.551884739s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:08:37.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-491" for this suite. • [SLOW TEST:140.667 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:08:37.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Sep 29 11:08:37.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1717' Sep 29 11:08:37.750: INFO: stderr: "" Sep 29 11:08:37.750: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Sep 29 11:08:37.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-1717' Sep 29 11:08:37.864: INFO: stderr: "" Sep 29 11:08:37.864: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-09-29T11:08:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": 
{},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-09-29T11:08:37Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1717\",\n \"resourceVersion\": \"1603367\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1717/pods/e2e-test-httpd-pod\",\n \"uid\": \"7c7153bc-fe9a-4371-abef-995577c67ed8\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-q6xm9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-q6xm9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-q6xm9\"\n }\n }\n 
]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-09-29T11:08:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Sep 29 11:08:37.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-1717' Sep 29 11:08:38.329: INFO: stderr: "W0929 11:08:37.927435 824 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Sep 29 11:08:38.329: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Sep 29 11:08:38.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1717' Sep 29 11:08:48.651: INFO: stderr: "" Sep 29 11:08:48.651: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:08:48.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1717" for this suite. 
• [SLOW TEST:11.229 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":90,"skipped":1451,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:08:48.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Sep 29 11:08:55.335: INFO: Successfully updated pod "adopt-release-8gdt5" STEP: Checking that the Job readopts the Pod Sep 29 11:08:55.335: INFO: Waiting up to 15m0s for pod 
"adopt-release-8gdt5" in namespace "job-4993" to be "adopted" Sep 29 11:08:55.361: INFO: Pod "adopt-release-8gdt5": Phase="Running", Reason="", readiness=true. Elapsed: 26.455521ms Sep 29 11:08:57.366: INFO: Pod "adopt-release-8gdt5": Phase="Running", Reason="", readiness=true. Elapsed: 2.031460774s Sep 29 11:08:57.366: INFO: Pod "adopt-release-8gdt5" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Sep 29 11:08:57.878: INFO: Successfully updated pod "adopt-release-8gdt5" STEP: Checking that the Job releases the Pod Sep 29 11:08:57.878: INFO: Waiting up to 15m0s for pod "adopt-release-8gdt5" in namespace "job-4993" to be "released" Sep 29 11:08:57.895: INFO: Pod "adopt-release-8gdt5": Phase="Running", Reason="", readiness=true. Elapsed: 17.177779ms Sep 29 11:08:59.899: INFO: Pod "adopt-release-8gdt5": Phase="Running", Reason="", readiness=true. Elapsed: 2.021132234s Sep 29 11:08:59.899: INFO: Pod "adopt-release-8gdt5" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:08:59.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4993" for this suite. 
• [SLOW TEST:11.216 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":91,"skipped":1466,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:08:59.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Sep 29 11:09:00.204: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5443 /api/v1/namespaces/watch-5443/configmaps/e2e-watch-test-watch-closed e232612a-038a-4718-87c3-82e5ee34ac1e 1603501 0 
2020-09-29 11:09:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-29 11:09:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:09:00.204: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5443 /api/v1/namespaces/watch-5443/configmaps/e2e-watch-test-watch-closed e232612a-038a-4718-87c3-82e5ee34ac1e 1603502 0 2020-09-29 11:09:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-29 11:09:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Sep 29 11:09:00.294: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5443 /api/v1/namespaces/watch-5443/configmaps/e2e-watch-test-watch-closed e232612a-038a-4718-87c3-82e5ee34ac1e 1603504 0 2020-09-29 11:09:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-29 11:09:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:09:00.294: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5443 /api/v1/namespaces/watch-5443/configmaps/e2e-watch-test-watch-closed e232612a-038a-4718-87c3-82e5ee34ac1e 1603505 0 2020-09-29 11:09:00 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-09-29 11:09:00 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:09:00.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5443" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":92,"skipped":1470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:09:00.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:09:00.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3196" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":93,"skipped":1502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:09:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-rnbs STEP: Creating a pod to test 
atomic-volume-subpath Sep 29 11:09:00.543: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rnbs" in namespace "subpath-5178" to be "Succeeded or Failed" Sep 29 11:09:00.546: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423168ms Sep 29 11:09:02.550: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00730938s Sep 29 11:09:04.555: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 4.011940727s Sep 29 11:09:06.559: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 6.016347616s Sep 29 11:09:08.564: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 8.021088532s Sep 29 11:09:10.569: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 10.025436256s Sep 29 11:09:12.573: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 12.030216248s Sep 29 11:09:14.578: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 14.035356229s Sep 29 11:09:16.584: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 16.040927346s Sep 29 11:09:18.589: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 18.046120435s Sep 29 11:09:20.601: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 20.057554419s Sep 29 11:09:22.606: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Running", Reason="", readiness=true. Elapsed: 22.062556983s Sep 29 11:09:24.609: INFO: Pod "pod-subpath-test-configmap-rnbs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.066338116s STEP: Saw pod success Sep 29 11:09:24.609: INFO: Pod "pod-subpath-test-configmap-rnbs" satisfied condition "Succeeded or Failed" Sep 29 11:09:24.612: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-rnbs container test-container-subpath-configmap-rnbs: STEP: delete the pod Sep 29 11:09:24.658: INFO: Waiting for pod pod-subpath-test-configmap-rnbs to disappear Sep 29 11:09:24.667: INFO: Pod pod-subpath-test-configmap-rnbs no longer exists STEP: Deleting pod pod-subpath-test-configmap-rnbs Sep 29 11:09:24.667: INFO: Deleting pod "pod-subpath-test-configmap-rnbs" in namespace "subpath-5178" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:09:24.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5178" for this suite. • [SLOW TEST:24.297 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":94,"skipped":1547,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:09:24.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:09:28.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-265" for this suite. 
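The kubelet test above schedules a busybox command in a pod and checks that its stdout shows up in the container logs. A minimal pod of that shape; the pod name, image tag, and echoed string are assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                # assumed; the e2e suite pins its own image
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```

Once the container terminates, `kubectl logs busybox-scheduling-demo` should print the echoed line, which is essentially what the test's assertion inspects.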
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1561,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:09:28.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8b05f13e-e913-47dd-80eb-53f223efa796 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8b05f13e-e913-47dd-80eb-53f223efa796 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:10:45.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1201" for this suite. 
• [SLOW TEST:76.446 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1564,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:10:45.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 29 11:10:49.342: 
INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:10:49.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5820" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1567,"failed":0} S ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:10:49.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Sep 29 11:10:49.484: INFO: created test-pod-1 Sep 29 11:10:49.494: INFO: created test-pod-2 Sep 29 11:10:49.536: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted 
[AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:10:49.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-686" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":98,"skipped":1568,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:10:49.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:10:49.825: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 29 11:10:52.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 create -f -' Sep 29 11:10:56.234: INFO: stderr: "" Sep 29 11:10:56.235: INFO: stdout: "e2e-test-crd-publish-openapi-3450-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 29 11:10:56.235: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 delete e2e-test-crd-publish-openapi-3450-crds test-cr' Sep 29 11:10:56.357: INFO: stderr: "" Sep 29 11:10:56.357: INFO: stdout: "e2e-test-crd-publish-openapi-3450-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Sep 29 11:10:56.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 apply -f -' Sep 29 11:10:56.658: INFO: stderr: "" Sep 29 11:10:56.658: INFO: stdout: "e2e-test-crd-publish-openapi-3450-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Sep 29 11:10:56.658: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1338 delete e2e-test-crd-publish-openapi-3450-crds test-cr' Sep 29 11:10:56.764: INFO: stderr: "" Sep 29 11:10:56.764: INFO: stdout: "e2e-test-crd-publish-openapi-3450-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Sep 29 11:10:56.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3450-crds' Sep 29 11:10:57.018: INFO: stderr: "" Sep 29 11:10:57.018: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3450-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:10:59.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1338" for this suite. 
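A CRD "without validation schema" in the apiextensions/v1 sense still needs a structural schema, but `x-kubernetes-preserve-unknown-fields: true` makes it accept arbitrary properties, matching the "allows request with any unknown properties" behavior logged above. A sketch patterned after the group name in the log; the plural/singular/kind names are assumptions:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.crd-publish-openapi-test-empty.example.com   # plural is assumed
spec:
  group: crd-publish-openapi-test-empty.example.com           # group taken from the log
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # no field validation; unknown fields round-trip
```

`kubectl explain` on such a CRD prints only KIND and VERSION with an empty DESCRIPTION, as seen in the stdout above.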
• [SLOW TEST:10.217 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":99,"skipped":1568,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:10:59.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-97436487-088c-4115-9bc2-83cf2c20eff4 STEP: Creating a pod to test consume secrets Sep 29 11:11:00.098: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755" in namespace "projected-2828" to be "Succeeded or Failed" Sep 29 11:11:00.122: INFO: 
Pod "pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755": Phase="Pending", Reason="", readiness=false. Elapsed: 24.074303ms Sep 29 11:11:02.155: INFO: Pod "pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057350155s Sep 29 11:11:04.159: INFO: Pod "pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061568141s STEP: Saw pod success Sep 29 11:11:04.159: INFO: Pod "pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755" satisfied condition "Succeeded or Failed" Sep 29 11:11:04.162: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755 container projected-secret-volume-test: STEP: delete the pod Sep 29 11:11:04.209: INFO: Waiting for pod pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755 to disappear Sep 29 11:11:04.257: INFO: Pod pod-projected-secrets-b5ad7a14-a92d-4a5f-849d-12aca9bd9755 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:11:04.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2828" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:04.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-wwjs STEP: Creating a pod to test atomic-volume-subpath Sep 29 11:11:04.443: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wwjs" in namespace "subpath-4654" to be "Succeeded or Failed" Sep 29 11:11:04.461: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209445ms Sep 29 11:11:06.466: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023437771s Sep 29 11:11:08.470: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.027374336s Sep 29 11:11:10.476: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 6.032869607s Sep 29 11:11:12.491: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 8.0476759s Sep 29 11:11:14.495: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 10.052005161s Sep 29 11:11:16.501: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 12.057923673s Sep 29 11:11:18.505: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 14.062131248s Sep 29 11:11:20.563: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 16.119780839s Sep 29 11:11:22.802: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 18.359517078s Sep 29 11:11:24.806: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 20.362891936s Sep 29 11:11:26.809: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Running", Reason="", readiness=true. Elapsed: 22.365721201s Sep 29 11:11:28.812: INFO: Pod "pod-subpath-test-downwardapi-wwjs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.369491215s STEP: Saw pod success Sep 29 11:11:28.812: INFO: Pod "pod-subpath-test-downwardapi-wwjs" satisfied condition "Succeeded or Failed" Sep 29 11:11:28.814: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-wwjs container test-container-subpath-downwardapi-wwjs: STEP: delete the pod Sep 29 11:11:28.857: INFO: Waiting for pod pod-subpath-test-downwardapi-wwjs to disappear Sep 29 11:11:28.869: INFO: Pod pod-subpath-test-downwardapi-wwjs no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wwjs Sep 29 11:11:28.869: INFO: Deleting pod "pod-subpath-test-downwardapi-wwjs" in namespace "subpath-4654" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:11:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4654" for this suite. • [SLOW TEST:24.609 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":101,"skipped":1606,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:28.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Sep 29 11:11:28.974: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Sep 29 11:11:28.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:29.515: INFO: stderr: "" Sep 29 11:11:29.515: INFO: stdout: "service/agnhost-replica created\n" Sep 29 11:11:29.515: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Sep 29 11:11:29.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:29.804: INFO: stderr: "" Sep 29 11:11:29.804: INFO: stdout: "service/agnhost-primary created\n"
Sep 29 11:11:29.804: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Sep 29 11:11:29.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:30.088: INFO: stderr: "" Sep 29 11:11:30.088: INFO: stdout: "service/frontend created\n" Sep 29 11:11:30.088: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Sep 29 11:11:30.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:30.379: INFO: stderr: "" Sep 29 11:11:30.379: INFO: stdout: "deployment.apps/frontend created\n" Sep 29 11:11:30.380: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 29 11:11:30.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:30.667: INFO: stderr: "" Sep 29 11:11:30.667: INFO: stdout: 
"deployment.apps/agnhost-primary created\n" Sep 29 11:11:30.667: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Sep 29 11:11:30.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5783' Sep 29 11:11:30.970: INFO: stderr: "" Sep 29 11:11:30.970: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Sep 29 11:11:30.970: INFO: Waiting for all frontend pods to be Running. Sep 29 11:11:41.020: INFO: Waiting for frontend to serve content. Sep 29 11:11:41.030: INFO: Trying to add a new entry to the guestbook. Sep 29 11:11:41.040: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Sep 29 11:11:41.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:41.222: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:41.222: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Sep 29 11:11:41.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:41.361: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:41.361: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 29 11:11:41.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:41.473: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:41.473: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 29 11:11:41.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:41.575: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:41.576: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Sep 29 11:11:41.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:41.743: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:41.743: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Sep 29 11:11:41.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5783' Sep 29 11:11:42.383: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:42.383: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:11:42.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5783" for this suite. 
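Editor's note: the guestbook test above creates each of its six objects by piping a separate manifest into `kubectl create -f -`. Outside the e2e framework, related objects are commonly combined into a single multi-document manifest; a sketch for the frontend tier only, with all field values copied from the manifests logged above (the file name is illustrative):

```yaml
# guestbook-frontend.yaml (hypothetical file name)
# Frontend Service + Deployment of the guestbook app, values taken from
# the manifests logged above; one file replaces two `create -f -` calls.
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: ["guestbook", "--backend-port", "6379"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
```

`kubectl apply -f guestbook-frontend.yaml --namespace=<ns>` would create both objects, and `kubectl delete -f` with the same file tears them down, mirroring the per-object cleanup steps recorded in the log.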
• [SLOW TEST:13.529 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":102,"skipped":1653,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:42.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-64308b76-8d8f-41a9-866f-499361808e8e STEP: Creating a pod to test consume secrets Sep 29 
11:11:43.841: INFO: Waiting up to 5m0s for pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041" in namespace "secrets-7593" to be "Succeeded or Failed" Sep 29 11:11:43.865: INFO: Pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041": Phase="Pending", Reason="", readiness=false. Elapsed: 23.695271ms Sep 29 11:11:45.972: INFO: Pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131285106s Sep 29 11:11:47.995: INFO: Pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154215694s Sep 29 11:11:49.999: INFO: Pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.157506684s STEP: Saw pod success Sep 29 11:11:49.999: INFO: Pod "pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041" satisfied condition "Succeeded or Failed" Sep 29 11:11:50.001: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041 container secret-volume-test: STEP: delete the pod Sep 29 11:11:50.023: INFO: Waiting for pod pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041 to disappear Sep 29 11:11:50.042: INFO: Pod pod-secrets-82781989-a8e1-47c0-98a9-9b4f8e84a041 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:11:50.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7593" for this suite. STEP: Destroying namespace "secret-namespace-3596" for this suite. 
• [SLOW TEST:7.652 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1659,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:50.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:11:50.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9649" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":104,"skipped":1673,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:50.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 29 11:11:50.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8462' Sep 29 11:11:50.597: INFO: stderr: "" Sep 29 11:11:50.597: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 29 11:11:50.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8462' Sep 29 11:11:50.813: INFO: stderr: "" Sep 29 11:11:50.813: INFO: stdout: "update-demo-nautilus-49r6q update-demo-nautilus-wd7hs " Sep 29 11:11:50.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49r6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8462' Sep 29 11:11:50.929: INFO: stderr: "" Sep 29 11:11:50.929: INFO: stdout: "" Sep 29 11:11:50.929: INFO: update-demo-nautilus-49r6q is created but not running Sep 29 11:11:55.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8462' Sep 29 11:11:56.029: INFO: stderr: "" Sep 29 11:11:56.029: INFO: stdout: "update-demo-nautilus-49r6q update-demo-nautilus-wd7hs " Sep 29 11:11:56.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49r6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8462' Sep 29 11:11:56.130: INFO: stderr: "" Sep 29 11:11:56.130: INFO: stdout: "true" Sep 29 11:11:56.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49r6q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8462' Sep 29 11:11:56.221: INFO: stderr: "" Sep 29 11:11:56.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:11:56.221: INFO: validating pod update-demo-nautilus-49r6q Sep 29 11:11:56.224: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:11:56.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 29 11:11:56.224: INFO: update-demo-nautilus-49r6q is verified up and running Sep 29 11:11:56.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wd7hs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8462' Sep 29 11:11:56.327: INFO: stderr: "" Sep 29 11:11:56.327: INFO: stdout: "true" Sep 29 11:11:56.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wd7hs -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8462' Sep 29 11:11:56.425: INFO: stderr: "" Sep 29 11:11:56.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:11:56.425: INFO: validating pod update-demo-nautilus-wd7hs Sep 29 11:11:56.430: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:11:56.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 29 11:11:56.430: INFO: update-demo-nautilus-wd7hs is verified up and running STEP: using delete to clean up resources Sep 29 11:11:56.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8462' Sep 29 11:11:56.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:11:56.540: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 29 11:11:56.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8462' Sep 29 11:11:56.641: INFO: stderr: "No resources found in kubectl-8462 namespace.\n" Sep 29 11:11:56.641: INFO: stdout: "" Sep 29 11:11:56.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8462 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 29 11:11:56.742: INFO: stderr: "" Sep 29 11:11:56.742: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 
29 11:11:56.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8462" for this suite. • [SLOW TEST:6.531 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":105,"skipped":1686,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:11:56.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Sep 29 11:11:57.281: INFO: Waiting up to 5m0s 
for pod "client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8" in namespace "containers-9838" to be "Succeeded or Failed" Sep 29 11:11:57.285: INFO: Pod "client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.763987ms Sep 29 11:11:59.289: INFO: Pod "client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007207565s Sep 29 11:12:01.293: INFO: Pod "client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011376149s STEP: Saw pod success Sep 29 11:12:01.293: INFO: Pod "client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8" satisfied condition "Succeeded or Failed" Sep 29 11:12:01.295: INFO: Trying to get logs from node kali-worker2 pod client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8 container test-container: STEP: delete the pod Sep 29 11:12:01.317: INFO: Waiting for pod client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8 to disappear Sep 29 11:12:01.321: INFO: Pod client-containers-1bca4471-54c2-4d45-8792-2564991bd3b8 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:12:01.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9838" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1697,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:12:01.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:12:12.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4339" for this suite. • [SLOW TEST:11.137 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":107,"skipped":1713,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:12:12.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:12:12.598: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819" in namespace "projected-9543" to be "Succeeded or Failed" Sep 29 11:12:12.601: INFO: Pod "downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.352045ms Sep 29 11:12:14.672: INFO: Pod "downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074147574s Sep 29 11:12:16.676: INFO: Pod "downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078110009s STEP: Saw pod success Sep 29 11:12:16.676: INFO: Pod "downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819" satisfied condition "Succeeded or Failed" Sep 29 11:12:16.679: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819 container client-container: STEP: delete the pod Sep 29 11:12:16.736: INFO: Waiting for pod downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819 to disappear Sep 29 11:12:16.741: INFO: Pod downwardapi-volume-1aabcb24-6050-41ab-a5fc-30602128f819 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:12:16.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9543" for this suite. 
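Editor's note: the Projected downwardAPI test above passes once a pod exposes its own CPU request through a projected volume, prints it, and exits. A minimal sketch of that kind of pod spec follows; the pod name, image, command, and mount path are illustrative assumptions (the actual test generates a UID-suffixed pod name), while the `projected`/`downwardAPI` volume structure and `resourceFieldRef` fields are standard Kubernetes API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 100m                    # the value the file should contain (in divisor units)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m            # report the request in millicores
```

With this spec the kubelet writes the container's CPU request into `/etc/podinfo/cpu_request`, the container cats it and terminates, and the pod reaches `Succeeded`, which is the "Succeeded or Failed" condition the log waits on.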
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:12:16.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3059 Sep 29 11:12:18.843: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 29 11:12:19.086: INFO: stderr: "I0929 11:12:18.987950 1358 log.go:181] (0xc000ea13f0) (0xc000e986e0) Create stream\nI0929 11:12:18.988011 1358 log.go:181] (0xc000ea13f0) (0xc000e986e0) Stream added, broadcasting: 1\nI0929 11:12:18.993307 1358 log.go:181] (0xc000ea13f0) Reply frame received for 1\nI0929 11:12:18.993334 
1358 log.go:181] (0xc000ea13f0) (0xc000e98000) Create stream\nI0929 11:12:18.993344 1358 log.go:181] (0xc000ea13f0) (0xc000e98000) Stream added, broadcasting: 3\nI0929 11:12:18.994368 1358 log.go:181] (0xc000ea13f0) Reply frame received for 3\nI0929 11:12:18.994403 1358 log.go:181] (0xc000ea13f0) (0xc000c6a0a0) Create stream\nI0929 11:12:18.994413 1358 log.go:181] (0xc000ea13f0) (0xc000c6a0a0) Stream added, broadcasting: 5\nI0929 11:12:18.995305 1358 log.go:181] (0xc000ea13f0) Reply frame received for 5\nI0929 11:12:19.072083 1358 log.go:181] (0xc000ea13f0) Data frame received for 5\nI0929 11:12:19.072124 1358 log.go:181] (0xc000c6a0a0) (5) Data frame handling\nI0929 11:12:19.072153 1358 log.go:181] (0xc000c6a0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0929 11:12:19.077466 1358 log.go:181] (0xc000ea13f0) Data frame received for 3\nI0929 11:12:19.077512 1358 log.go:181] (0xc000e98000) (3) Data frame handling\nI0929 11:12:19.077552 1358 log.go:181] (0xc000e98000) (3) Data frame sent\nI0929 11:12:19.077865 1358 log.go:181] (0xc000ea13f0) Data frame received for 5\nI0929 11:12:19.077907 1358 log.go:181] (0xc000c6a0a0) (5) Data frame handling\nI0929 11:12:19.078066 1358 log.go:181] (0xc000ea13f0) Data frame received for 3\nI0929 11:12:19.078093 1358 log.go:181] (0xc000e98000) (3) Data frame handling\nI0929 11:12:19.079976 1358 log.go:181] (0xc000ea13f0) Data frame received for 1\nI0929 11:12:19.080001 1358 log.go:181] (0xc000e986e0) (1) Data frame handling\nI0929 11:12:19.080013 1358 log.go:181] (0xc000e986e0) (1) Data frame sent\nI0929 11:12:19.080038 1358 log.go:181] (0xc000ea13f0) (0xc000e986e0) Stream removed, broadcasting: 1\nI0929 11:12:19.080064 1358 log.go:181] (0xc000ea13f0) Go away received\nI0929 11:12:19.080523 1358 log.go:181] (0xc000ea13f0) (0xc000e986e0) Stream removed, broadcasting: 1\nI0929 11:12:19.080547 1358 log.go:181] (0xc000ea13f0) (0xc000e98000) Stream removed, broadcasting: 3\nI0929 
11:12:19.080558 1358 log.go:181] (0xc000ea13f0) (0xc000c6a0a0) Stream removed, broadcasting: 5\n" Sep 29 11:12:19.086: INFO: stdout: "iptables" Sep 29 11:12:19.086: INFO: proxyMode: iptables Sep 29 11:12:19.092: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:12:19.110: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:12:21.110: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:12:21.115: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:12:23.110: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:12:23.114: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3059 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3059 I0929 11:12:23.181668 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3059, replica count: 3 I0929 11:12:26.232055 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:12:29.232290 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:12:29.239: INFO: Creating new exec pod Sep 29 11:12:34.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 execpod-affinityhng7c -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Sep 29 11:12:34.478: INFO: stderr: "I0929 11:12:34.400189 1376 log.go:181] (0xc001001080) (0xc0006f8820) Create stream\nI0929 11:12:34.400341 1376 log.go:181] (0xc001001080) (0xc0006f8820) Stream added, broadcasting: 1\nI0929 11:12:34.405424 1376 log.go:181] (0xc001001080) Reply frame received for 1\nI0929 11:12:34.405458 1376 log.go:181] (0xc001001080) (0xc0006f8000) 
Create stream\nI0929 11:12:34.405466 1376 log.go:181] (0xc001001080) (0xc0006f8000) Stream added, broadcasting: 3\nI0929 11:12:34.406387 1376 log.go:181] (0xc001001080) Reply frame received for 3\nI0929 11:12:34.406422 1376 log.go:181] (0xc001001080) (0xc0005d4140) Create stream\nI0929 11:12:34.406436 1376 log.go:181] (0xc001001080) (0xc0005d4140) Stream added, broadcasting: 5\nI0929 11:12:34.407248 1376 log.go:181] (0xc001001080) Reply frame received for 5\nI0929 11:12:34.469476 1376 log.go:181] (0xc001001080) Data frame received for 5\nI0929 11:12:34.469515 1376 log.go:181] (0xc0005d4140) (5) Data frame handling\nI0929 11:12:34.469543 1376 log.go:181] (0xc0005d4140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0929 11:12:34.470663 1376 log.go:181] (0xc001001080) Data frame received for 5\nI0929 11:12:34.470689 1376 log.go:181] (0xc0005d4140) (5) Data frame handling\nI0929 11:12:34.470707 1376 log.go:181] (0xc0005d4140) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0929 11:12:34.470938 1376 log.go:181] (0xc001001080) Data frame received for 5\nI0929 11:12:34.470985 1376 log.go:181] (0xc0005d4140) (5) Data frame handling\nI0929 11:12:34.471018 1376 log.go:181] (0xc001001080) Data frame received for 3\nI0929 11:12:34.471034 1376 log.go:181] (0xc0006f8000) (3) Data frame handling\nI0929 11:12:34.472544 1376 log.go:181] (0xc001001080) Data frame received for 1\nI0929 11:12:34.472577 1376 log.go:181] (0xc0006f8820) (1) Data frame handling\nI0929 11:12:34.472604 1376 log.go:181] (0xc0006f8820) (1) Data frame sent\nI0929 11:12:34.472625 1376 log.go:181] (0xc001001080) (0xc0006f8820) Stream removed, broadcasting: 1\nI0929 11:12:34.472655 1376 log.go:181] (0xc001001080) Go away received\nI0929 11:12:34.473478 1376 log.go:181] (0xc001001080) (0xc0006f8820) Stream removed, broadcasting: 1\nI0929 11:12:34.473520 1376 log.go:181] (0xc001001080) (0xc0006f8000) Stream removed, broadcasting: 3\nI0929 
11:12:34.473543 1376 log.go:181] (0xc001001080) (0xc0005d4140) Stream removed, broadcasting: 5\n" Sep 29 11:12:34.478: INFO: stdout: "" Sep 29 11:12:34.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 execpod-affinityhng7c -- /bin/sh -x -c nc -zv -t -w 2 10.101.14.246 80' Sep 29 11:12:34.695: INFO: stderr: "I0929 11:12:34.611995 1394 log.go:181] (0xc000a1f3f0) (0xc000a16960) Create stream\nI0929 11:12:34.612055 1394 log.go:181] (0xc000a1f3f0) (0xc000a16960) Stream added, broadcasting: 1\nI0929 11:12:34.621421 1394 log.go:181] (0xc000a1f3f0) Reply frame received for 1\nI0929 11:12:34.621462 1394 log.go:181] (0xc000a1f3f0) (0xc000a16000) Create stream\nI0929 11:12:34.621471 1394 log.go:181] (0xc000a1f3f0) (0xc000a16000) Stream added, broadcasting: 3\nI0929 11:12:34.622218 1394 log.go:181] (0xc000a1f3f0) Reply frame received for 3\nI0929 11:12:34.622246 1394 log.go:181] (0xc000a1f3f0) (0xc000130a00) Create stream\nI0929 11:12:34.622254 1394 log.go:181] (0xc000a1f3f0) (0xc000130a00) Stream added, broadcasting: 5\nI0929 11:12:34.623080 1394 log.go:181] (0xc000a1f3f0) Reply frame received for 5\nI0929 11:12:34.688542 1394 log.go:181] (0xc000a1f3f0) Data frame received for 3\nI0929 11:12:34.688592 1394 log.go:181] (0xc000a1f3f0) Data frame received for 5\nI0929 11:12:34.688636 1394 log.go:181] (0xc000130a00) (5) Data frame handling\nI0929 11:12:34.688652 1394 log.go:181] (0xc000130a00) (5) Data frame sent\n+ nc -zv -t -w 2 10.101.14.246 80\nConnection to 10.101.14.246 80 port [tcp/http] succeeded!\nI0929 11:12:34.688672 1394 log.go:181] (0xc000a16000) (3) Data frame handling\nI0929 11:12:34.688814 1394 log.go:181] (0xc000a1f3f0) Data frame received for 5\nI0929 11:12:34.688939 1394 log.go:181] (0xc000130a00) (5) Data frame handling\nI0929 11:12:34.690329 1394 log.go:181] (0xc000a1f3f0) Data frame received for 1\nI0929 11:12:34.690348 1394 log.go:181] (0xc000a16960) (1) Data frame 
handling\nI0929 11:12:34.690362 1394 log.go:181] (0xc000a16960) (1) Data frame sent\nI0929 11:12:34.690377 1394 log.go:181] (0xc000a1f3f0) (0xc000a16960) Stream removed, broadcasting: 1\nI0929 11:12:34.690574 1394 log.go:181] (0xc000a1f3f0) Go away received\nI0929 11:12:34.690750 1394 log.go:181] (0xc000a1f3f0) (0xc000a16960) Stream removed, broadcasting: 1\nI0929 11:12:34.690769 1394 log.go:181] (0xc000a1f3f0) (0xc000a16000) Stream removed, broadcasting: 3\nI0929 11:12:34.690782 1394 log.go:181] (0xc000a1f3f0) (0xc000130a00) Stream removed, broadcasting: 5\n" Sep 29 11:12:34.695: INFO: stdout: "" Sep 29 11:12:34.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 execpod-affinityhng7c -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.14.246:80/ ; done' Sep 29 11:12:35.013: INFO: stderr: "I0929 11:12:34.842500 1412 log.go:181] (0xc00003a420) (0xc000aa2000) Create stream\nI0929 11:12:34.842565 1412 log.go:181] (0xc00003a420) (0xc000aa2000) Stream added, broadcasting: 1\nI0929 11:12:34.844599 1412 log.go:181] (0xc00003a420) Reply frame received for 1\nI0929 11:12:34.844670 1412 log.go:181] (0xc00003a420) (0xc000309400) Create stream\nI0929 11:12:34.844693 1412 log.go:181] (0xc00003a420) (0xc000309400) Stream added, broadcasting: 3\nI0929 11:12:34.845864 1412 log.go:181] (0xc00003a420) Reply frame received for 3\nI0929 11:12:34.845904 1412 log.go:181] (0xc00003a420) (0xc000174000) Create stream\nI0929 11:12:34.845915 1412 log.go:181] (0xc00003a420) (0xc000174000) Stream added, broadcasting: 5\nI0929 11:12:34.846797 1412 log.go:181] (0xc00003a420) Reply frame received for 5\nI0929 11:12:34.897628 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.897656 1412 log.go:181] (0xc000174000) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.897680 1412 
log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.897717 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.897734 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.897765 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.903167 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.903183 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.903191 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.903559 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.903582 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.903602 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.903621 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.903632 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.903655 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.910890 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.910911 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.910930 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.911576 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.911606 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.911635 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.911650 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.911687 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.911749 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.919206 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.919224 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 
11:12:34.919235 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.920053 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.920093 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.920188 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.920211 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.920230 1412 log.go:181] (0xc000309400) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.920244 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.925387 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.925412 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.925424 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.926063 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.926090 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.926106 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.926120 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.926136 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.926200 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.933573 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.933594 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.933608 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.934367 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.934406 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.934422 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.934446 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.934457 1412 log.go:181] (0xc000174000) (5) Data frame 
handling\nI0929 11:12:34.934468 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.937942 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.937976 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.938006 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.938507 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.938530 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.938548 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.938623 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.938643 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.938660 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.944124 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.944139 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.944148 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.945299 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.945331 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.945346 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.945365 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.945376 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.945388 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.952353 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.952385 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.952414 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.953267 1412 log.go:181] (0xc00003a420) Data frame 
received for 5\nI0929 11:12:34.953297 1412 log.go:181] (0xc000174000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.953317 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.953343 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.953357 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.953377 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.958070 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.958091 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.958110 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.958959 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.958970 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.958976 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.959087 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.959121 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.959140 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.966325 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.966336 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.966342 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.966731 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.966749 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.966762 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.967324 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.967342 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.967355 1412 log.go:181] (0xc000309400) (3) 
Data frame sent\nI0929 11:12:34.973730 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.973754 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.973774 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.974390 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.974419 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.974432 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.974451 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.974464 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.974476 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.974489 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.974505 1412 log.go:181] (0xc000174000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.974546 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.979390 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.979413 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.979434 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.980309 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.980337 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.980345 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.980365 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.980392 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.980412 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.984615 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.984666 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.984690 1412 log.go:181] 
(0xc000309400) (3) Data frame sent\nI0929 11:12:34.985518 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.985573 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.985594 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.985623 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.985640 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.985675 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.991726 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.991758 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.991803 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.992768 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.992805 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.992820 1412 log.go:181] (0xc000174000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.992960 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.992994 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.993018 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.997294 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.997317 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.997335 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.998201 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:34.998223 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:34.998235 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:34.998247 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.998263 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:34.998288 
1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:34.998309 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:34.998330 1412 log.go:181] (0xc000174000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:34.998368 1412 log.go:181] (0xc000174000) (5) Data frame sent\nI0929 11:12:35.004639 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:35.004671 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:35.004689 1412 log.go:181] (0xc000309400) (3) Data frame sent\nI0929 11:12:35.005645 1412 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 11:12:35.005675 1412 log.go:181] (0xc000174000) (5) Data frame handling\nI0929 11:12:35.005786 1412 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 11:12:35.005828 1412 log.go:181] (0xc000309400) (3) Data frame handling\nI0929 11:12:35.007956 1412 log.go:181] (0xc00003a420) Data frame received for 1\nI0929 11:12:35.008059 1412 log.go:181] (0xc000aa2000) (1) Data frame handling\nI0929 11:12:35.008101 1412 log.go:181] (0xc000aa2000) (1) Data frame sent\nI0929 11:12:35.008123 1412 log.go:181] (0xc00003a420) (0xc000aa2000) Stream removed, broadcasting: 1\nI0929 11:12:35.008141 1412 log.go:181] (0xc00003a420) Go away received\nI0929 11:12:35.008617 1412 log.go:181] (0xc00003a420) (0xc000aa2000) Stream removed, broadcasting: 1\nI0929 11:12:35.008641 1412 log.go:181] (0xc00003a420) (0xc000309400) Stream removed, broadcasting: 3\nI0929 11:12:35.008660 1412 log.go:181] (0xc00003a420) (0xc000174000) Stream removed, broadcasting: 5\n" Sep 29 11:12:35.014: INFO: stdout: 
"\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf\naffinity-clusterip-timeout-n47cf" Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.014: INFO: Received response from host: affinity-clusterip-timeout-n47cf Sep 29 11:12:35.015: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 execpod-affinityhng7c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.101.14.246:80/' Sep 29 11:12:35.218: INFO: stderr: "I0929 11:12:35.145463 1430 log.go:181] (0xc0008a7810) (0xc00089ebe0) Create stream\nI0929 11:12:35.145519 1430 log.go:181] (0xc0008a7810) (0xc00089ebe0) Stream added, broadcasting: 1\nI0929 11:12:35.150895 1430 log.go:181] (0xc0008a7810) Reply frame received for 1\nI0929 11:12:35.150932 1430 log.go:181] (0xc0008a7810) (0xc000308780) Create stream\nI0929 11:12:35.150942 1430 log.go:181] (0xc0008a7810) (0xc000308780) Stream added, broadcasting: 3\nI0929 11:12:35.151822 1430 log.go:181] (0xc0008a7810) Reply frame received for 3\nI0929 11:12:35.151875 1430 log.go:181] (0xc0008a7810) (0xc00089e000) Create stream\nI0929 11:12:35.151907 1430 log.go:181] (0xc0008a7810) (0xc00089e000) Stream added, broadcasting: 5\nI0929 11:12:35.153006 1430 log.go:181] (0xc0008a7810) Reply frame received for 5\nI0929 11:12:35.206179 1430 log.go:181] (0xc0008a7810) Data frame received for 5\nI0929 11:12:35.206210 1430 log.go:181] (0xc00089e000) (5) Data frame handling\nI0929 11:12:35.206233 1430 log.go:181] (0xc00089e000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:35.211264 1430 log.go:181] (0xc0008a7810) Data frame received for 3\nI0929 11:12:35.211289 1430 log.go:181] (0xc000308780) (3) Data frame handling\nI0929 11:12:35.211305 1430 log.go:181] (0xc000308780) (3) Data frame sent\nI0929 11:12:35.212113 1430 log.go:181] (0xc0008a7810) Data frame received for 3\nI0929 11:12:35.212134 1430 log.go:181] (0xc000308780) (3) Data frame handling\nI0929 11:12:35.212155 1430 log.go:181] (0xc0008a7810) Data frame received for 5\nI0929 11:12:35.212162 1430 log.go:181] (0xc00089e000) (5) Data frame handling\nI0929 11:12:35.214524 1430 log.go:181] (0xc0008a7810) Data frame received for 1\nI0929 
11:12:35.214554 1430 log.go:181] (0xc00089ebe0) (1) Data frame handling\nI0929 11:12:35.214564 1430 log.go:181] (0xc00089ebe0) (1) Data frame sent\nI0929 11:12:35.214576 1430 log.go:181] (0xc0008a7810) (0xc00089ebe0) Stream removed, broadcasting: 1\nI0929 11:12:35.214586 1430 log.go:181] (0xc0008a7810) Go away received\nI0929 11:12:35.214942 1430 log.go:181] (0xc0008a7810) (0xc00089ebe0) Stream removed, broadcasting: 1\nI0929 11:12:35.214956 1430 log.go:181] (0xc0008a7810) (0xc000308780) Stream removed, broadcasting: 3\nI0929 11:12:35.214963 1430 log.go:181] (0xc0008a7810) (0xc00089e000) Stream removed, broadcasting: 5\n" Sep 29 11:12:35.218: INFO: stdout: "affinity-clusterip-timeout-n47cf" Sep 29 11:12:50.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3059 execpod-affinityhng7c -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.101.14.246:80/' Sep 29 11:12:50.434: INFO: stderr: "I0929 11:12:50.349125 1448 log.go:181] (0xc000e11340) (0xc0005c4960) Create stream\nI0929 11:12:50.349173 1448 log.go:181] (0xc000e11340) (0xc0005c4960) Stream added, broadcasting: 1\nI0929 11:12:50.351901 1448 log.go:181] (0xc000e11340) Reply frame received for 1\nI0929 11:12:50.351951 1448 log.go:181] (0xc000e11340) (0xc000d16280) Create stream\nI0929 11:12:50.351969 1448 log.go:181] (0xc000e11340) (0xc000d16280) Stream added, broadcasting: 3\nI0929 11:12:50.353083 1448 log.go:181] (0xc000e11340) Reply frame received for 3\nI0929 11:12:50.353113 1448 log.go:181] (0xc000e11340) (0xc000d16320) Create stream\nI0929 11:12:50.353123 1448 log.go:181] (0xc000e11340) (0xc000d16320) Stream added, broadcasting: 5\nI0929 11:12:50.354009 1448 log.go:181] (0xc000e11340) Reply frame received for 5\nI0929 11:12:50.421711 1448 log.go:181] (0xc000e11340) Data frame received for 5\nI0929 11:12:50.421733 1448 log.go:181] (0xc000d16320) (5) Data frame handling\nI0929 11:12:50.421744 1448 log.go:181] 
(0xc000d16320) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.101.14.246:80/\nI0929 11:12:50.427925 1448 log.go:181] (0xc000e11340) Data frame received for 3\nI0929 11:12:50.427955 1448 log.go:181] (0xc000d16280) (3) Data frame handling\nI0929 11:12:50.427975 1448 log.go:181] (0xc000d16280) (3) Data frame sent\nI0929 11:12:50.428731 1448 log.go:181] (0xc000e11340) Data frame received for 3\nI0929 11:12:50.428753 1448 log.go:181] (0xc000d16280) (3) Data frame handling\nI0929 11:12:50.428788 1448 log.go:181] (0xc000e11340) Data frame received for 5\nI0929 11:12:50.428813 1448 log.go:181] (0xc000d16320) (5) Data frame handling\nI0929 11:12:50.430455 1448 log.go:181] (0xc000e11340) Data frame received for 1\nI0929 11:12:50.430468 1448 log.go:181] (0xc0005c4960) (1) Data frame handling\nI0929 11:12:50.430482 1448 log.go:181] (0xc0005c4960) (1) Data frame sent\nI0929 11:12:50.430513 1448 log.go:181] (0xc000e11340) (0xc0005c4960) Stream removed, broadcasting: 1\nI0929 11:12:50.430549 1448 log.go:181] (0xc000e11340) Go away received\nI0929 11:12:50.430798 1448 log.go:181] (0xc000e11340) (0xc0005c4960) Stream removed, broadcasting: 1\nI0929 11:12:50.430811 1448 log.go:181] (0xc000e11340) (0xc000d16280) Stream removed, broadcasting: 3\nI0929 11:12:50.430816 1448 log.go:181] (0xc000e11340) (0xc000d16320) Stream removed, broadcasting: 5\n" Sep 29 11:12:50.434: INFO: stdout: "affinity-clusterip-timeout-k56b4" Sep 29 11:12:50.434: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3059, will wait for the garbage collector to delete the pods Sep 29 11:12:50.528: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.037348ms Sep 29 11:12:51.028: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.248826ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:12:58.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3059" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:42.048 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":109,"skipped":1740,"failed":0} SSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:12:58.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Sep 29 11:12:58.883: INFO: Created pod &Pod{ObjectMeta:{dns-6176 dns-6176 /api/v1/namespaces/dns-6176/pods/dns-6176 b90c0cdd-fa91-48c7-8352-33db823922be 1604940 0 2020-09-29 11:12:58 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-09-29 11:12:58 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8cn62,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8cn62,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8cn62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 29 11:12:58.886: INFO: The status of Pod dns-6176 is Pending, waiting for it to be Running (with Ready = true)
Sep 29 11:13:00.913: INFO: The status of Pod dns-6176 is Pending, waiting for it to be Running (with Ready = true)
Sep 29 11:13:02.890: INFO: The status of Pod dns-6176 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Sep 29 11:13:02.890: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6176 PodName:dns-6176 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 29 11:13:02.890: INFO: >>> kubeConfig: /root/.kube/config
I0929 11:13:02.921971 7 log.go:181] (0xc0065f60b0) (0xc000f42000) Create stream
I0929 11:13:02.922017 7 log.go:181] (0xc0065f60b0) (0xc000f42000) Stream added, broadcasting: 1
I0929 11:13:02.923998 7 log.go:181] (0xc0065f60b0) Reply frame received for 1
I0929 11:13:02.924031 7 log.go:181] (0xc0065f60b0) (0xc006848640) Create stream
I0929 11:13:02.924045 7 log.go:181] (0xc0065f60b0) (0xc006848640) Stream added, broadcasting: 3
I0929 11:13:02.925016 7 log.go:181] (0xc0065f60b0) Reply frame received for 3
I0929 11:13:02.925076 7 log.go:181] (0xc0065f60b0) (0xc0037526e0) Create stream
I0929 11:13:02.925120 7 log.go:181] (0xc0065f60b0) (0xc0037526e0) Stream added, broadcasting: 5
I0929 11:13:02.926258 7 log.go:181] (0xc0065f60b0) Reply frame received for 5
I0929 11:13:03.017272 7 log.go:181] (0xc0065f60b0) Data frame received for 3
I0929 11:13:03.017308 7 log.go:181] (0xc006848640) (3) Data frame handling
I0929 11:13:03.017334 7 log.go:181] (0xc006848640) (3) Data frame sent
I0929 11:13:03.017791 7 log.go:181] (0xc0065f60b0) Data frame received for 5
I0929 11:13:03.017828 7 log.go:181] (0xc0037526e0) (5) Data frame handling
I0929 11:13:03.017860 7 log.go:181] (0xc0065f60b0) Data frame received for 3
I0929 11:13:03.017876 7 log.go:181] (0xc006848640) (3) Data frame handling
I0929 11:13:03.019745 7 log.go:181] (0xc0065f60b0) Data frame received for 1
I0929 11:13:03.019790 7 log.go:181] (0xc000f42000) (1) Data frame handling
I0929 11:13:03.019828 7 log.go:181] (0xc000f42000) (1) Data frame sent
I0929 11:13:03.019858 7 log.go:181] (0xc0065f60b0) (0xc000f42000) Stream removed, broadcasting: 1
I0929 11:13:03.019895 7 log.go:181] (0xc0065f60b0) Go away received
I0929 11:13:03.020376 7 log.go:181] (0xc0065f60b0) (0xc000f42000) Stream removed, broadcasting: 1
I0929 11:13:03.020399 7 log.go:181] (0xc0065f60b0) (0xc006848640) Stream removed, broadcasting: 3
I0929 11:13:03.020410 7 log.go:181] (0xc0065f60b0) (0xc0037526e0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Sep 29 11:13:03.020: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6176 PodName:dns-6176 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 29 11:13:03.020: INFO: >>> kubeConfig: /root/.kube/config
I0929 11:13:03.053602 7 log.go:181] (0xc0064fc370) (0xc006c8b040) Create stream
I0929 11:13:03.053622 7 log.go:181] (0xc0064fc370) (0xc006c8b040) Stream added, broadcasting: 1
I0929 11:13:03.055441 7 log.go:181] (0xc0064fc370) Reply frame received for 1
I0929 11:13:03.055475 7 log.go:181] (0xc0064fc370) (0xc0068486e0) Create stream
I0929 11:13:03.055485 7 log.go:181] (0xc0064fc370) (0xc0068486e0) Stream added, broadcasting: 3
I0929 11:13:03.056472 7 log.go:181] (0xc0064fc370) Reply frame received for 3
I0929 11:13:03.056514 7 log.go:181] (0xc0064fc370) (0xc003752780) Create stream
I0929 11:13:03.056530 7 log.go:181] (0xc0064fc370) (0xc003752780) Stream added, broadcasting: 5
I0929 11:13:03.057466 7 log.go:181] (0xc0064fc370) Reply frame received for 5
I0929 11:13:03.137648 7 log.go:181] (0xc0064fc370) Data frame received for 3
I0929 11:13:03.137675 7 log.go:181] (0xc0068486e0) (3) Data frame handling
I0929 11:13:03.137692 7 log.go:181] (0xc0068486e0) (3) Data frame sent
I0929 11:13:03.139116 7 log.go:181] (0xc0064fc370) Data frame received for 3
I0929 11:13:03.139151 7 log.go:181] (0xc0068486e0) (3) Data frame handling
I0929 11:13:03.139424 7 log.go:181] (0xc0064fc370) Data frame received for 5
I0929 11:13:03.139443 7 log.go:181] (0xc003752780) (5) Data frame handling
I0929 11:13:03.141008 7 log.go:181] (0xc0064fc370) Data frame received for 1
I0929 11:13:03.141025 7 log.go:181] (0xc006c8b040) (1) Data frame handling
I0929 11:13:03.141039 7 log.go:181] (0xc006c8b040) (1) Data frame sent
I0929 11:13:03.141328 7 log.go:181] (0xc0064fc370) (0xc006c8b040) Stream removed, broadcasting: 1
I0929 11:13:03.141366 7 log.go:181] (0xc0064fc370) Go away received
I0929 11:13:03.141621 7 log.go:181] (0xc0064fc370) (0xc006c8b040) Stream removed, broadcasting: 1
I0929 11:13:03.141652 7 log.go:181] (0xc0064fc370) (0xc0068486e0) Stream removed, broadcasting: 3
I0929 11:13:03.141674 7 log.go:181] (0xc0064fc370) (0xc003752780) Stream removed, broadcasting: 5
Sep 29 11:13:03.141: INFO: Deleting pod dns-6176...
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:13:03.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6176" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":110,"skipped":1744,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:13:03.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Sep 29 11:13:07.829: INFO: Successfully updated pod "annotationupdate438d265b-71d6-42a4-91ab-4c09ade1b78a"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:13:09.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5362" for this suite.
• [SLOW TEST:6.658 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1769,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:13:09.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Sep 29 11:13:13.929: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4126 PodName:var-expansion-1429be4b-912c-45ee-8e76-5f0408a8478b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 29 11:13:13.929: INFO: >>> kubeConfig: /root/.kube/config
I0929 11:13:13.969797 7 log.go:181] (0xc004904420) (0xc006849400) Create stream
I0929 11:13:13.969837 7 log.go:181] (0xc004904420) (0xc006849400) Stream added, broadcasting: 1
I0929 11:13:13.971538 7 log.go:181] (0xc004904420) Reply frame received for 1
I0929 11:13:13.971580 7 log.go:181] (0xc004904420) (0xc0068494a0) Create stream
I0929 11:13:13.971595 7 log.go:181] (0xc004904420) (0xc0068494a0) Stream added, broadcasting: 3
I0929 11:13:13.972643 7 log.go:181] (0xc004904420) Reply frame received for 3
I0929 11:13:13.972677 7 log.go:181] (0xc004904420) (0xc000e6e140) Create stream
I0929 11:13:13.972693 7 log.go:181] (0xc004904420) (0xc000e6e140) Stream added, broadcasting: 5
I0929 11:13:13.973601 7 log.go:181] (0xc004904420) Reply frame received for 5
I0929 11:13:14.064930 7 log.go:181] (0xc004904420) Data frame received for 5
I0929 11:13:14.065037 7 log.go:181] (0xc000e6e140) (5) Data frame handling
I0929 11:13:14.065071 7 log.go:181] (0xc004904420) Data frame received for 3
I0929 11:13:14.065091 7 log.go:181] (0xc0068494a0) (3) Data frame handling
I0929 11:13:14.066452 7 log.go:181] (0xc004904420) Data frame received for 1
I0929 11:13:14.066505 7 log.go:181] (0xc006849400) (1) Data frame handling
I0929 11:13:14.066540 7 log.go:181] (0xc006849400) (1) Data frame sent
I0929 11:13:14.066561 7 log.go:181] (0xc004904420) (0xc006849400) Stream removed, broadcasting: 1
I0929 11:13:14.066586 7 log.go:181] (0xc004904420) Go away received
I0929 11:13:14.066708 7 log.go:181] (0xc004904420) (0xc006849400) Stream removed, broadcasting: 1
I0929 11:13:14.066743 7 log.go:181] (0xc004904420) (0xc0068494a0) Stream removed, broadcasting: 3
I0929 11:13:14.066765 7 log.go:181] (0xc004904420) (0xc000e6e140) Stream removed, broadcasting: 5
STEP: test for file in mounted path
Sep 29 11:13:14.070: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4126 PodName:var-expansion-1429be4b-912c-45ee-8e76-5f0408a8478b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Sep 29 11:13:14.070: INFO: >>> kubeConfig: /root/.kube/config
I0929 11:13:14.103285 7 log.go:181] (0xc00366c6e0) (0xc00375fc20) Create stream
I0929 11:13:14.103334 7 log.go:181] (0xc00366c6e0) (0xc00375fc20) Stream added, broadcasting: 1
I0929 11:13:14.105158 7 log.go:181] (0xc00366c6e0) Reply frame received for 1
I0929 11:13:14.105194 7 log.go:181] (0xc00366c6e0) (0xc000f42c80) Create stream
I0929 11:13:14.105214 7 log.go:181] (0xc00366c6e0) (0xc000f42c80) Stream added, broadcasting: 3
I0929 11:13:14.106048 7 log.go:181] (0xc00366c6e0) Reply frame received for 3
I0929 11:13:14.106083 7 log.go:181] (0xc00366c6e0) (0xc00375fcc0) Create stream
I0929 11:13:14.106104 7 log.go:181] (0xc00366c6e0) (0xc00375fcc0) Stream added, broadcasting: 5
I0929 11:13:14.106862 7 log.go:181] (0xc00366c6e0) Reply frame received for 5
I0929 11:13:14.170259 7 log.go:181] (0xc00366c6e0) Data frame received for 5
I0929 11:13:14.170291 7 log.go:181] (0xc00375fcc0) (5) Data frame handling
I0929 11:13:14.170330 7 log.go:181] (0xc00366c6e0) Data frame received for 3
I0929 11:13:14.170371 7 log.go:181] (0xc000f42c80) (3) Data frame handling
I0929 11:13:14.171455 7 log.go:181] (0xc00366c6e0) Data frame received for 1
I0929 11:13:14.171472 7 log.go:181] (0xc00375fc20) (1) Data frame handling
I0929 11:13:14.171490 7 log.go:181] (0xc00375fc20) (1) Data frame sent
I0929 11:13:14.171506 7 log.go:181] (0xc00366c6e0) (0xc00375fc20) Stream removed, broadcasting: 1
I0929 11:13:14.171524 7 log.go:181] (0xc00366c6e0) Go away received
I0929 11:13:14.171627 7 log.go:181] (0xc00366c6e0) (0xc00375fc20) Stream removed, broadcasting: 1
I0929 11:13:14.171648 7 log.go:181] (0xc00366c6e0) (0xc000f42c80) Stream removed, broadcasting: 3
I0929 11:13:14.171658 7 log.go:181] (0xc00366c6e0) (0xc00375fcc0) Stream removed, broadcasting: 5
STEP: updating the annotation value
Sep 29 11:13:14.684: INFO: Successfully updated pod "var-expansion-1429be4b-912c-45ee-8e76-5f0408a8478b"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Sep 29 11:13:14.706: INFO: Deleting pod "var-expansion-1429be4b-912c-45ee-8e76-5f0408a8478b" in namespace "var-expansion-4126"
Sep 29 11:13:14.710: INFO: Wait up to 5m0s for pod "var-expansion-1429be4b-912c-45ee-8e76-5f0408a8478b" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:13:48.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4126" for this suite.
• [SLOW TEST:38.908 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":112,"skipped":1773,"failed":0}
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:13:48.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 29 11:13:48.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b" in namespace "downward-api-4953" to be "Succeeded or Failed"
Sep 29 11:13:48.828: INFO: Pod "downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682555ms
Sep 29 11:13:50.831: INFO: Pod "downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005621738s
Sep 29 11:13:52.835: INFO: Pod "downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009590456s
STEP: Saw pod success
Sep 29 11:13:52.835: INFO: Pod "downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b" satisfied condition "Succeeded or Failed"
Sep 29 11:13:52.839: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b container client-container: 
STEP: delete the pod
Sep 29 11:13:52.869: INFO: Waiting for pod downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b to disappear
Sep 29 11:13:52.896: INFO: Pod downwardapi-volume-9531d400-c2f5-4593-858b-7129429e4b2b no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:13:52.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4953" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1773,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:13:52.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 29 11:13:53.505: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 29 11:13:55.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974833, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974833, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974833, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736974833, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 29 11:13:58.558: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 11:13:58.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:13:59.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5750" for this suite.
STEP: Destroying namespace "webhook-5750-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.022 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":114,"skipped":1776,"failed":0}
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:13:59.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-96c552a9-b82e-42af-8f76-712448dad1bb
STEP: Creating a pod to test consume secrets
Sep 29 11:14:00.069: INFO: Waiting up to 5m0s for pod "pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a" in namespace "secrets-8325" to be "Succeeded or Failed"
Sep 29 11:14:00.075: INFO: Pod "pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.623816ms
Sep 29 11:14:02.190: INFO: Pod "pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120454347s
Sep 29 11:14:04.194: INFO: Pod "pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125061291s
STEP: Saw pod success
Sep 29 11:14:04.194: INFO: Pod "pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a" satisfied condition "Succeeded or Failed"
Sep 29 11:14:04.197: INFO: Trying to get logs from node kali-worker pod pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a container secret-volume-test: 
STEP: delete the pod
Sep 29 11:14:04.270: INFO: Waiting for pod pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a to disappear
Sep 29 11:14:04.278: INFO: Pod pod-secrets-92e0ef25-3ece-4b4c-b347-7414b31adf1a no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:14:04.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8325" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:14:04.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 29 11:14:04.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43" in namespace "downward-api-5521" to be "Succeeded or Failed"
Sep 29 11:14:04.437: INFO: Pod "downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43": Phase="Pending", Reason="", readiness=false. Elapsed: 79.370042ms
Sep 29 11:14:06.442: INFO: Pod "downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084221475s
Sep 29 11:14:08.447: INFO: Pod "downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089121194s
STEP: Saw pod success
Sep 29 11:14:08.447: INFO: Pod "downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43" satisfied condition "Succeeded or Failed"
Sep 29 11:14:08.450: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43 container client-container: 
STEP: delete the pod
Sep 29 11:14:08.605: INFO: Waiting for pod downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43 to disappear
Sep 29 11:14:08.655: INFO: Pod downwardapi-volume-6666551f-4e96-4638-8561-b4539ed3db43 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:14:08.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5521" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":1809,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:14:08.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Sep 29 11:14:08.772: INFO: >>> kubeConfig: /root/.kube/config
Sep 29 11:14:10.716: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:14:22.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6389" for this suite.
• [SLOW TEST:13.852 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":117,"skipped":1829,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:14:22.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Sep 29 11:14:22.564: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:14:22.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3439" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":118,"skipped":1835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:14:22.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-4f5f74e2-a5f7-4ebd-8aa0-01fd5bca86b3 STEP: Creating secret with name secret-projected-all-test-volume-bcf745de-57dd-49b2-9106-0c451cbc368d STEP: Creating a pod to test Check all projections for projected volume plugin Sep 29 11:14:22.783: INFO: Waiting up to 5m0s for pod 
"projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b" in namespace "projected-7533" to be "Succeeded or Failed" Sep 29 11:14:22.798: INFO: Pod "projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036025ms Sep 29 11:14:24.802: INFO: Pod "projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018672057s Sep 29 11:14:26.807: INFO: Pod "projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02317468s STEP: Saw pod success Sep 29 11:14:26.807: INFO: Pod "projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b" satisfied condition "Succeeded or Failed" Sep 29 11:14:26.810: INFO: Trying to get logs from node kali-worker pod projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b container projected-all-volume-test: STEP: delete the pod Sep 29 11:14:26.854: INFO: Waiting for pod projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b to disappear Sep 29 11:14:26.859: INFO: Pod projected-volume-57fa0bb6-ac0e-41f8-85f3-65d97c55922b no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:14:26.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7533" for this suite. 
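The pod-wait entries above report elapsed times as Go duration strings ("14.036025ms", "2.018672057s", "5m0s"). When post-processing a run like this, a small parser is handy; the following is a minimal sketch (the helper name `parse_go_duration` is ours, not part of the e2e framework) covering only the units that appear in this log:

```python
import re

# Multipliers for the duration units seen in this log.
_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def parse_go_duration(text):
    """Convert a Go duration string such as '2.018672057s', '14.036025ms',
    or '5m0s' into seconds. Units not listed in _UNITS are ignored."""
    total = 0.0
    # 'ms' is tried before 'm' and 's' so it is not split apart.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", text):
        total += float(value) * _UNITS[unit]
    return total

print(parse_go_duration("5m0s"))          # 300.0
print(parse_go_duration("2.018672057s"))  # 2.018672057
```

This makes the framework's compound timeouts ("5m0s", "3m0s") directly comparable with the sub-second poll intervals logged for each pod.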
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":119,"skipped":1881,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:14:26.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-071518ef-11e4-4544-9003-de41846eebc9 STEP: Creating a pod to test consume configMaps Sep 29 11:14:26.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd" in namespace "projected-8601" to be "Succeeded or Failed" Sep 29 11:14:26.961: INFO: Pod "pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.728701ms Sep 29 11:14:28.964: INFO: Pod "pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007986925s Sep 29 11:14:30.969: INFO: Pod "pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012790175s STEP: Saw pod success Sep 29 11:14:30.969: INFO: Pod "pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd" satisfied condition "Succeeded or Failed" Sep 29 11:14:30.972: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd container projected-configmap-volume-test: STEP: delete the pod Sep 29 11:14:31.005: INFO: Waiting for pod pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd to disappear Sep 29 11:14:31.010: INFO: Pod pod-projected-configmaps-ff509af2-e589-4e53-8460-c30efbfc79cd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:14:31.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8601" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1891,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:14:31.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 29 11:14:35.619: INFO: Successfully updated pod "annotationupdate570af11b-a76e-4394-b9f7-05e2d16b3805" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:14:37.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7211" for this suite. 
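Each `{"msg": ...}` line above is a machine-readable Ginkgo progress record; the field names ("total", "completed", "skipped", "failed") come straight from the log. A sketch of folding them into a running tally (the summarizing function is ours):

```python
import json

def summarize(progress_lines):
    """Fold Ginkgo JSON progress records (as emitted in this log) into a
    final tally. Counters are cumulative, so the last record wins."""
    last = None
    for line in progress_lines:
        # Records may carry a leading '•' pass marker; strip to the JSON.
        start = line.find("{")
        if start == -1:
            continue
        last = json.loads(line[start:])
    return last

records = [
    '•{"msg":"PASSED ...","total":303,"completed":120,"skipped":1891,"failed":0}',
    '{"msg":"PASSED ...","total":303,"completed":121,"skipped":1901,"failed":0}',
]
final = summarize(records)
print(final["total"] - final["completed"])  # 182 specs still to run
```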
• [SLOW TEST:6.672 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:14:37.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 29 11:14:37.818: INFO: Waiting up to 1m0s for all nodes to be ready Sep 29 11:15:37.840: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:15:37.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Sep 29 11:15:41.953: INFO: found a healthy node: kali-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:16:02.164: INFO: pods created so far: [1 1 1] Sep 29 11:16:02.164: INFO: length of pods created so far: 3 Sep 29 11:16:10.173: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:16:17.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2241" for this suite. 
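The preemption test reports per-ReplicaSet pod counts as a Go slice literal ("pods created so far: [2 2 1]"). Recovering that from a log line can be sketched as follows (the function name is ours):

```python
import re

def pods_created(line):
    """Extract the space-separated count slice from a line like
    'pods created so far: [2 2 1]' and return it as a list of ints."""
    match = re.search(r"pods created so far: \[([\d ]+)\]", line)
    if match is None:
        return []
    return [int(n) for n in match.group(1).split()]

counts = pods_created("Sep 29 11:16:10.173: INFO: pods created so far: [2 2 1]")
print(counts, sum(counts))  # [2, 2, 1] 5
```

The per-slot totals make it easy to see which ReplicaSet's pods were preempted between the two observations logged above.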
[AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:16:17.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2875" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:99.641 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":122,"skipped":1919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:16:17.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-ec4c5d0a-5887-411b-bac1-f33a2ed0df3b STEP: Creating a pod to test consume configMaps Sep 29 11:16:17.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21" in namespace "configmap-9964" to be "Succeeded or Failed" Sep 29 11:16:17.462: INFO: Pod "pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21": Phase="Pending", Reason="", readiness=false. Elapsed: 14.075919ms Sep 29 11:16:19.510: INFO: Pod "pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062000926s Sep 29 11:16:21.516: INFO: Pod "pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.067461603s STEP: Saw pod success Sep 29 11:16:21.516: INFO: Pod "pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21" satisfied condition "Succeeded or Failed" Sep 29 11:16:21.519: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21 container configmap-volume-test: STEP: delete the pod Sep 29 11:16:21.566: INFO: Waiting for pod pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21 to disappear Sep 29 11:16:21.585: INFO: Pod pod-configmaps-8c2e4969-a30c-46f8-9319-a42e7babad21 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:16:21.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9964" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":1958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:16:21.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 29 11:16:21.672: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:16:28.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1923" for this suite. • [SLOW TEST:6.777 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":124,"skipped":1990,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:16:28.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-ab872323-d569-49a0-9cb5-a8dd56a0aec2 in namespace container-probe-9425 Sep 29 11:16:32.509: INFO: Started pod liveness-ab872323-d569-49a0-9cb5-a8dd56a0aec2 in namespace container-probe-9425 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 11:16:32.511: INFO: Initial restart count of pod liveness-ab872323-d569-49a0-9cb5-a8dd56a0aec2 is 0 Sep 29 11:16:54.567: INFO: Restart count of pod container-probe-9425/liveness-ab872323-d569-49a0-9cb5-a8dd56a0aec2 is now 1 (22.055874809s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:16:54.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9425" for this suite. 
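The probe test above saw the restart count go from 0 to 1 about 22 seconds after the pod started. Ignoring probe timeouts and kubelet jitter, the earliest possible restart for an always-failing HTTP liveness probe follows from the probe schedule alone. A back-of-the-envelope helper (the probe parameter values below are illustrative; the actual spec used by this conformance test is not shown in the log):

```python
def earliest_restart_seconds(initial_delay, period, failure_threshold):
    """Rough lower bound on when the kubelet restarts a container whose
    liveness probe always fails: the first probe fires after initial_delay,
    then failure_threshold consecutive failures accumulate, one per period.
    Probe timeouts and scheduling jitter are ignored."""
    return initial_delay + (failure_threshold - 1) * period

# Illustrative values only (not taken from this log): 15s delay, 3s period,
# threshold 3 -> first possible restart around the 21s mark, in the same
# ballpark as the ~22s observed above.
print(earliest_restart_seconds(15, 3, 3))  # 21
```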
• [SLOW TEST:26.256 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2000,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:16:54.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 11:16:55.857: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 11:16:57.866: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975015, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975015, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975015, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975015, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:17:00.893: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:17:00.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5344-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:17:02.103: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-914" for this suite. STEP: Destroying namespace "webhook-914-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.579 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":126,"skipped":2004,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:17:02.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-597 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-597 STEP: Creating statefulset with conflicting port in namespace statefulset-597 STEP: Waiting until pod test-pod will start running in namespace statefulset-597 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-597 Sep 29 11:17:06.387: INFO: Observed stateful pod in namespace: statefulset-597, name: ss-0, uid: acc52ee9-be23-4f6a-9dc5-93424993a89b, status phase: Pending. Waiting for statefulset controller to delete. Sep 29 11:17:06.766: INFO: Observed stateful pod in namespace: statefulset-597, name: ss-0, uid: acc52ee9-be23-4f6a-9dc5-93424993a89b, status phase: Failed. Waiting for statefulset controller to delete. Sep 29 11:17:06.801: INFO: Observed stateful pod in namespace: statefulset-597, name: ss-0, uid: acc52ee9-be23-4f6a-9dc5-93424993a89b, status phase: Failed. Waiting for statefulset controller to delete. 
Sep 29 11:17:06.815: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-597 STEP: Removing pod with conflicting port in namespace statefulset-597 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-597 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Sep 29 11:17:10.953: INFO: Deleting all statefulset in ns statefulset-597 Sep 29 11:17:10.956: INFO: Scaling statefulset ss to 0 Sep 29 11:17:20.979: INFO: Waiting for statefulset status.replicas updated to 0 Sep 29 11:17:20.982: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:17:20.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-597" for this suite. 
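The StatefulSet test above asserts a specific observation order: the conflicting port drives ss-0 to Failed, the controller deletes it, and a fresh ss-0 comes back running. That acceptance check can be sketched over an observed event sequence (the event vocabulary here is ours, modeled on the log lines above, not the e2e framework's API):

```python
def recreated_after_failure(events):
    """Return True if the observed sequence shows the pod failing, being
    deleted, and subsequently running again, in that order."""
    order = ["Failed", "Deleted", "Running"]
    stage = 0
    for event in events:
        if stage < len(order) and event == order[stage]:
            stage += 1
    return stage == len(order)

# Sequence mirroring the ss-0 observations logged above.
observed = ["Pending", "Failed", "Failed", "Deleted", "Running"]
print(recreated_after_failure(observed))  # True
```

Intervening repeats (the two Failed observations while waiting for the controller) are tolerated; only the relative order of the three milestones matters.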
• [SLOW TEST:18.804 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":127,"skipped":2012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:17:21.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:17:21.160: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1934" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":128,"skipped":2050,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:17:21.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:17:21.283: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6c1f7320-2f44-49cd-85d1-ed8ad35ad445" in namespace "security-context-test-1631" to be "Succeeded or Failed" Sep 29 11:17:21.310: INFO: Pod "busybox-readonly-false-6c1f7320-2f44-49cd-85d1-ed8ad35ad445": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.995083ms Sep 29 11:17:23.315: INFO: Pod "busybox-readonly-false-6c1f7320-2f44-49cd-85d1-ed8ad35ad445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031802695s Sep 29 11:17:25.321: INFO: Pod "busybox-readonly-false-6c1f7320-2f44-49cd-85d1-ed8ad35ad445": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037542085s Sep 29 11:17:25.321: INFO: Pod "busybox-readonly-false-6c1f7320-2f44-49cd-85d1-ed8ad35ad445" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:17:25.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1631" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2058,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:17:25.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback 
without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:17:25.395: INFO: Create a RollingUpdate DaemonSet Sep 29 11:17:25.399: INFO: Check that daemon pods launch on every node of the cluster Sep 29 11:17:25.435: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:25.437: INFO: Number of nodes with available pods: 0 Sep 29 11:17:25.437: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:17:26.555: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:26.558: INFO: Number of nodes with available pods: 0 Sep 29 11:17:26.558: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:17:27.674: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:27.678: INFO: Number of nodes with available pods: 0 Sep 29 11:17:27.678: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:17:28.442: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:28.445: INFO: Number of nodes with available pods: 0 Sep 29 11:17:28.445: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:17:29.444: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:29.448: INFO: Number of nodes with available pods: 1 Sep 29 11:17:29.448: INFO: Node kali-worker2 
is running more than one daemon pod Sep 29 11:17:30.476: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:30.479: INFO: Number of nodes with available pods: 2 Sep 29 11:17:30.479: INFO: Number of running nodes: 2, number of available pods: 2 Sep 29 11:17:30.479: INFO: Update the DaemonSet to trigger a rollout Sep 29 11:17:30.496: INFO: Updating DaemonSet daemon-set Sep 29 11:17:39.540: INFO: Roll back the DaemonSet before rollout is complete Sep 29 11:17:39.549: INFO: Updating DaemonSet daemon-set Sep 29 11:17:39.549: INFO: Make sure DaemonSet rollback is complete Sep 29 11:17:39.568: INFO: Wrong image for pod: daemon-set-zq6ln. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Sep 29 11:17:39.568: INFO: Pod daemon-set-zq6ln is not available Sep 29 11:17:39.581: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:40.590: INFO: Wrong image for pod: daemon-set-zq6ln. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Sep 29 11:17:40.590: INFO: Pod daemon-set-zq6ln is not available Sep 29 11:17:40.594: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:17:41.586: INFO: Pod daemon-set-xkzbs is not available Sep 29 11:17:41.591: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4568, will wait for the garbage collector to delete the pods Sep 29 11:17:41.697: INFO: Deleting DaemonSet.extensions daemon-set took: 46.038509ms Sep 29 11:17:42.097: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.195215ms Sep 29 11:17:48.703: INFO: Number of nodes with available pods: 0 Sep 29 11:17:48.703: INFO: Number of running nodes: 0, number of available pods: 0 Sep 29 11:17:48.706: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4568/daemonsets","resourceVersion":"1606747"},"items":null} Sep 29 11:17:48.708: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4568/pods","resourceVersion":"1606747"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:17:48.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4568" for this suite. 
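The "Wrong image for pod" / "is not available" lines above come from the rollback-completeness check: the test polls each daemon pod and reports mismatches until every pod is back on the expected image. A minimal Python sketch of that per-pod predicate (the helper name and pod tuples are illustrative, not the framework's actual code):

```python
def rollback_incomplete(pods, expected_image):
    """Return log-style messages for pods that still block rollback,
    mirroring the 'Wrong image for pod' lines in the e2e output.

    pods: iterable of (name, image, available) tuples.
    """
    problems = []
    for name, image, available in pods:
        if image != expected_image:
            problems.append(
                f"Wrong image for pod: {name}. "
                f"Expected: {expected_image}, got: {image}."
            )
        if not available:
            problems.append(f"Pod {name} is not available")
    return problems

# State seen in the log: one pod still running the bad rollout image.
pods = [("daemon-set-zq6ln", "foo:non-existent", False)]
msgs = rollback_incomplete(pods, "docker.io/library/httpd:2.4.38-alpine")
# Rollback is complete once this list comes back empty.
```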
• [SLOW TEST:23.397 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":130,"skipped":2065,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:17:48.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Sep 29 11:17:48.868: INFO: Waiting up to 1m0s for all nodes to be ready Sep 29 11:18:48.887: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. 
Sep 29 11:18:48.924: INFO: Created pod: pod0-sched-preemption-low-priority Sep 29 11:18:48.995: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:19:23.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5865" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:94.380 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":131,"skipped":2066,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Sep 29 11:19:23.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:19:27.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9986" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":2076,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:19:27.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Sep 29 11:19:33.373: INFO: &Pod{ObjectMeta:{send-events-dc81ec68-265d-4981-9c4b-56a5a9f8a64e events-4262 /api/v1/namespaces/events-4262/pods/send-events-dc81ec68-265d-4981-9c4b-56a5a9f8a64e 4ff8d483-21cc-43de-9edb-a922d9f41eae 1607182 0 2020-09-29 11:19:27 +0000 UTC map[name:foo time:324347488] map[] [] [] [{e2e.test Update v1 2020-09-29 11:19:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 11:19:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fscx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fscx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fscx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,D
NSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:19:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:19:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:19:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.77,StartTime:2020-09-29 11:19:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 11:19:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://18121568fa98f1a4ba6649a2415f901a2fe1f0eb740b7f1b967a7652267b99e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Sep 29 11:19:35.378: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Sep 29 11:19:37.382: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:19:37.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4262" for this suite. 
• [SLOW TEST:10.209 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":133,"skipped":2097,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:19:37.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 29 11:19:37.502: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:19:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3495" for this suite. • [SLOW TEST:16.303 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":134,"skipped":2097,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:19:53.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-56fdb439-4464-47ea-be7e-175ae28fea7d STEP: Creating a pod to test consume secrets Sep 29 11:19:53.888: INFO: Waiting up to 5m0s for pod "pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f" in namespace "secrets-6466" to be "Succeeded or Failed" Sep 29 11:19:53.902: INFO: Pod "pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.714126ms Sep 29 11:19:55.982: INFO: Pod "pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093665312s Sep 29 11:19:57.986: INFO: Pod "pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097751738s STEP: Saw pod success Sep 29 11:19:57.986: INFO: Pod "pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f" satisfied condition "Succeeded or Failed" Sep 29 11:19:57.989: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f container secret-volume-test: STEP: delete the pod Sep 29 11:19:58.081: INFO: Waiting for pod pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f to disappear Sep 29 11:19:58.119: INFO: Pod pod-secrets-fc668b51-47ec-41f1-a884-8ab2bb6fc01f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:19:58.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6466" for this suite. 
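The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines above are a poll loop over the pod's phase. A simplified Python sketch of that wait, with the phase lookup injected so it needs no real cluster (poll count stands in for the 5m0s wall-clock timeout; this is an approximation, not the framework's implementation):

```python
def wait_for_pod_condition(get_phase, timeout_polls=150):
    """Poll a pod's phase until it reaches a terminal state, roughly how
    the e2e framework waits for 'Succeeded or Failed'.

    get_phase: zero-arg callable returning the current pod phase string.
    """
    for _ in range(timeout_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Fake pod that is Pending for two polls, then Succeeded,
# matching the three log lines of the secrets test above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases))
# result == "Succeeded", satisfying the "Succeeded or Failed" condition.
```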
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2133,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:19:58.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 29 11:19:58.181: INFO: Waiting up to 5m0s for pod "pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280" in namespace "emptydir-1289" to be "Succeeded or Failed" Sep 29 11:19:58.183: INFO: Pod "pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280": Phase="Pending", Reason="", readiness=false. Elapsed: 1.944897ms Sep 29 11:20:00.219: INFO: Pod "pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037920366s Sep 29 11:20:02.223: INFO: Pod "pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04237364s STEP: Saw pod success Sep 29 11:20:02.223: INFO: Pod "pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280" satisfied condition "Succeeded or Failed" Sep 29 11:20:02.226: INFO: Trying to get logs from node kali-worker2 pod pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280 container test-container: STEP: delete the pod Sep 29 11:20:02.244: INFO: Waiting for pod pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280 to disappear Sep 29 11:20:02.248: INFO: Pod pod-6f4dedef-38ee-4d08-8b97-fcca9ba82280 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:20:02.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1289" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":136,"skipped":2133,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:20:02.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 
secret with name secret-test-map-e1561d70-7ac1-44f4-8fed-5489b915e4b4 STEP: Creating a pod to test consume secrets Sep 29 11:20:02.358: INFO: Waiting up to 5m0s for pod "pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4" in namespace "secrets-8410" to be "Succeeded or Failed" Sep 29 11:20:02.362: INFO: Pod "pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136657ms Sep 29 11:20:04.367: INFO: Pod "pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009380087s Sep 29 11:20:06.375: INFO: Pod "pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017047339s STEP: Saw pod success Sep 29 11:20:06.375: INFO: Pod "pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4" satisfied condition "Succeeded or Failed" Sep 29 11:20:06.377: INFO: Trying to get logs from node kali-worker pod pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4 container secret-volume-test: STEP: delete the pod Sep 29 11:20:06.408: INFO: Waiting for pod pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4 to disappear Sep 29 11:20:06.430: INFO: Pod pod-secrets-e06cfbaf-75f2-4773-8e2d-a6f363332ca4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:20:06.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8410" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2149,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:20:06.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 29 11:20:06.483: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:20:14.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9366" for this suite. 
• [SLOW TEST:7.600 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should invoke init containers on a RestartNever pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":138,"skipped":2160,"failed":0}
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:20:14.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:20:30.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7847" for this suite.
• [SLOW TEST:16.288 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":139,"skipped":2160,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:20:30.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8736, will wait for the garbage collector to delete the pods
Sep 29 11:20:36.485: INFO: Deleting Job.batch foo took: 7.621837ms
Sep 29 11:20:36.985: INFO: Terminating Job.batch foo pods took: 500.248184ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:18.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8736" for this suite.
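The Job test above creates a Job, waits until the number of active pods equals `parallelism`, deletes the Job, and then waits for the garbage collector to remove the pods. A Job of roughly that shape might look like this (all names and values are illustrative, not the test's actual spec):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2               # test waits until active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox         # illustrative image
        command: ["sleep", "3600"]
```

Deleting the Job (e.g. `kubectl delete job foo`) leaves pod cleanup to the garbage collector; that handoff is the "will wait for the garbage collector to delete the pods" step visible in the log.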
• [SLOW TEST:48.370 seconds]
[sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":140,"skipped":2167,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:18.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 11:21:18.773: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:19.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2871" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":141,"skipped":2178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:19.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 29 11:21:20.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep 29 11:21:22.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False",
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975279, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 29 11:21:24.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975280, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975279, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 29 11:21:27.188: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Sep 29 11:21:27.216: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:27.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1333" for this suite.
STEP: Destroying namespace "webhook-1333-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.935 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":142,"skipped":2211,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io]
Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:27.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Sep 29 11:21:27.403: INFO: Waiting up to 5m0s for pod "var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb" in namespace "var-expansion-4388" to be "Succeeded or Failed"
Sep 29 11:21:27.444: INFO: Pod "var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb": Phase="Pending", Reason="", readiness=false. Elapsed: 41.45412ms
Sep 29 11:21:29.449: INFO: Pod "var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045473759s
Sep 29 11:21:31.643: INFO: Pod "var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.239978935s
STEP: Saw pod success
Sep 29 11:21:31.643: INFO: Pod "var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb" satisfied condition "Succeeded or Failed"
Sep 29 11:21:31.646: INFO: Trying to get logs from node kali-worker pod var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb container dapi-container:
STEP: delete the pod
Sep 29 11:21:31.666: INFO: Waiting for pod var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb to disappear
Sep 29 11:21:31.670: INFO: Pod var-expansion-1a952d4e-1ccf-447f-814f-e4d6fd730adb no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:31.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4388" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2232,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:31.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret.
[Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:48.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7706" for this suite.
• [SLOW TEST:17.104 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":144,"skipped":2250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:48.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 29 11:21:48.867: INFO: Waiting up to 5m0s for pod "pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2" in namespace "emptydir-567" to be "Succeeded or Failed"
Sep 29 11:21:48.894: INFO: Pod "pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.731363ms
Sep 29 11:21:50.898: INFO: Pod "pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030537633s
Sep 29 11:21:52.949: INFO: Pod "pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.082060488s
STEP: Saw pod success
Sep 29 11:21:52.949: INFO: Pod "pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2" satisfied condition "Succeeded or Failed"
Sep 29 11:21:52.953: INFO: Trying to get logs from node kali-worker pod pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2 container test-container:
STEP: delete the pod
Sep 29 11:21:52.972: INFO: Waiting for pod pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2 to disappear
Sep 29 11:21:52.976: INFO: Pod pod-6f45d7e5-99b4-4706-bbf3-8988db5628f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:52.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-567" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":145,"skipped":2286,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:52.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-1d2d6973-f4d9-4b61-ac8c-80272cb70e71
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-1d2d6973-f4d9-4b61-ac8c-80272cb70e71
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:21:59.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1715" for this suite.
• [SLOW TEST:6.334 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2291,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:21:59.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope.
[Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:22:15.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8973" for this suite.
• [SLOW TEST:16.361 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":147,"skipped":2298,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:22:15.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 29 11:22:15.783: INFO: Waiting up to 5m0s for pod "pod-4edb5ddf-2137-429f-980b-27e1b7d90d65" in namespace "emptydir-9737" to be "Succeeded or Failed"
Sep 29 11:22:15.815: INFO: Pod "pod-4edb5ddf-2137-429f-980b-27e1b7d90d65": Phase="Pending", Reason="", readiness=false. Elapsed: 31.564843ms
Sep 29 11:22:17.819: INFO: Pod "pod-4edb5ddf-2137-429f-980b-27e1b7d90d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035630564s
Sep 29 11:22:19.823: INFO: Pod "pod-4edb5ddf-2137-429f-980b-27e1b7d90d65": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.039807706s
STEP: Saw pod success
Sep 29 11:22:19.823: INFO: Pod "pod-4edb5ddf-2137-429f-980b-27e1b7d90d65" satisfied condition "Succeeded or Failed"
Sep 29 11:22:19.826: INFO: Trying to get logs from node kali-worker pod pod-4edb5ddf-2137-429f-980b-27e1b7d90d65 container test-container:
STEP: delete the pod
Sep 29 11:22:19.847: INFO: Waiting for pod pod-4edb5ddf-2137-429f-980b-27e1b7d90d65 to disappear
Sep 29 11:22:19.889: INFO: Pod pod-4edb5ddf-2137-429f-980b-27e1b7d90d65 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:22:19.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9737" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":148,"skipped":2299,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:22:19.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:22:26.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-293" for this suite.
STEP: Destroying namespace "nsdeletetest-6261" for this suite.
Sep 29 11:22:26.315: INFO: Namespace nsdeletetest-6261 was already deleted
STEP: Destroying namespace "nsdeletetest-9963" for this suite.
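The namespace test above relies on cascading deletion: removing a Namespace deletes every namespaced object inside it, including Services, and a recreated namespace of the same name starts empty. A sketch of a Service like the one the test creates (name, namespace, and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service           # hypothetical name
  namespace: nsdeletetest      # illustrative namespace
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80
```

Deleting the namespace (e.g. `kubectl delete namespace nsdeletetest`) removes this Service without it ever being deleted directly, which is exactly what the "Verifying there is no service in the namespace" step checks after recreation.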
• [SLOW TEST:6.423 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":149,"skipped":2310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:22:26.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:22:26.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8382" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":150,"skipped":2332,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:22:26.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0929 11:22:27.750277 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Sep 29 11:23:29.962: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:23:29.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5489" for this suite. • [SLOW TEST:63.378 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":151,"skipped":2333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:23:29.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 29 11:23:30.054: INFO: Waiting up to 5m0s for pod "downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3" in namespace "downward-api-9685" to be "Succeeded or Failed" Sep 29 11:23:30.070: INFO: Pod "downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 15.943188ms Sep 29 11:23:32.120: INFO: Pod "downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066426576s Sep 29 11:23:34.125: INFO: Pod "downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071474014s STEP: Saw pod success Sep 29 11:23:34.126: INFO: Pod "downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3" satisfied condition "Succeeded or Failed" Sep 29 11:23:34.129: INFO: Trying to get logs from node kali-worker pod downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3 container dapi-container: STEP: delete the pod Sep 29 11:23:34.168: INFO: Waiting for pod downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3 to disappear Sep 29 11:23:34.180: INFO: Pod downward-api-c4738668-93c0-4cf2-8381-e0d90b8af7a3 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:23:34.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9685" for this suite. 
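The Downward API test above exposes the container's own limits.cpu/memory and requests.cpu/memory as environment variables via resourceFieldRef entries in the pod spec. A minimal sketch of that mapping, with hypothetical variable naming (the real test generates its own names):

```python
def downward_api_env(resources):
    """Map a container's resources to env vars the way downward-API
    resourceFieldRef entries would expose them.

    resources: dict like
      {"requests": {"cpu": "250m", "memory": "32Mi"},
       "limits":   {"cpu": "1",    "memory": "64Mi"}}
    """
    env = {}
    for kind in ("requests", "limits"):
        for res, value in resources.get(kind, {}).items():
            # e.g. requests.cpu -> CPU_REQUEST (hypothetical naming)
            name = f"{res.upper()}_{kind[:-1].upper()}"
            env[name] = value
    return env
```

The test's dapi-container then prints these variables and the framework checks the pod reaches "Succeeded", as the log above shows.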
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:23:34.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Sep 29 11:23:34.828: INFO: created pod pod-service-account-defaultsa Sep 29 11:23:34.828: INFO: pod pod-service-account-defaultsa service account token volume mount: true Sep 29 11:23:34.833: INFO: created pod pod-service-account-mountsa Sep 29 11:23:34.833: INFO: pod pod-service-account-mountsa service account token volume mount: true Sep 29 11:23:34.927: INFO: created pod pod-service-account-nomountsa Sep 29 11:23:34.927: INFO: pod pod-service-account-nomountsa service account token volume mount: false Sep 29 11:23:34.947: INFO: created pod pod-service-account-defaultsa-mountspec Sep 29 11:23:34.947: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Sep 29 11:23:35.006: INFO: created pod pod-service-account-mountsa-mountspec Sep 29 11:23:35.006: INFO: pod 
pod-service-account-mountsa-mountspec service account token volume mount: true Sep 29 11:23:35.053: INFO: created pod pod-service-account-nomountsa-mountspec Sep 29 11:23:35.053: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Sep 29 11:23:35.080: INFO: created pod pod-service-account-defaultsa-nomountspec Sep 29 11:23:35.080: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Sep 29 11:23:35.124: INFO: created pod pod-service-account-mountsa-nomountspec Sep 29 11:23:35.124: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Sep 29 11:23:35.178: INFO: created pod pod-service-account-nomountsa-nomountspec Sep 29 11:23:35.178: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:23:35.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8256" for this suite. 
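The ServiceAccounts test above creates nine pods covering every combination of the service account's automountServiceAccountToken setting (unset, true, false) and the pod-level spec.automountServiceAccountToken (unset, true, false). The precedence rule behind the logged "token volume mount: true/false" outcomes is that the pod-level field wins when set, otherwise the service account's, with mounting as the default; a sketch:

```python
def token_volume_mounted(sa_automount, pod_automount):
    """Decide whether a pod gets the API token volume mount.

    sa_automount:  the ServiceAccount's automountServiceAccountToken
                   (True, False, or None if unset).
    pod_automount: the pod's spec.automountServiceAccountToken.
    """
    if pod_automount is not None:   # pod spec overrides the SA
        return pod_automount
    if sa_automount is not None:    # else the SA setting applies
        return sa_automount
    return True                     # default is to mount the token
```

This reproduces the log: pod-service-account-nomountsa (SA opts out, pod unset) gets no mount, pod-service-account-nomountsa-mountspec (pod opts back in) does, and pod-service-account-defaultsa-nomountspec (pod opts out) does not.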
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":153,"skipped":2397,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:23:35.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-218 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-218;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-218 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-218;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-218.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-218.svc;check="$$(dig +tcp 
+noall +answer +search dns-test-service.dns-218.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-218.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-218.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-218.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-218.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-218.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.15.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.15.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.15.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.15.151_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-218 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-218;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-218 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-218;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-218.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-218.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-218.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-218.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-218.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-218.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-218.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-218.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-218.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-218.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.15.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.15.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.15.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.15.151_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:23:51.034: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.068: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.074: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.077: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.080: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 
29 11:23:51.083: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.086: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.089: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.132: INFO: Unable to read jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.135: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.138: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.140: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.143: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods 
dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.146: INFO: Unable to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.148: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.151: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:51.170: INFO: Lookups using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:23:56.174: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.177: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods 
dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.186: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.218: INFO: Unable to read jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.222: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource 
(get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.225: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.231: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.234: INFO: Unable to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.237: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.239: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:23:56.257: INFO: Lookups using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:24:01.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.178: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.190: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.193: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: 
the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.195: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.215: INFO: Unable to read jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.218: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.221: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.227: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod 
dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.237: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:01.258: INFO: Lookups using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:24:06.176: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.181: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.188: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod 
dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.190: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.192: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.194: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.197: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.217: INFO: Unable to read jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.219: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.222: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.225: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 
from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.227: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.233: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:06.253: INFO: Lookups using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:24:11.175: INFO: Unable to read wheezy_udp@dns-test-service 
from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.191: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.194: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.197: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.244: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.246: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.249: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.252: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.255: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.260: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:11.280: INFO: Lookups 
using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:24:16.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.179: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.183: INFO: Unable to read wheezy_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.186: INFO: Unable to read wheezy_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.193: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.196: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.199: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.222: INFO: Unable to read jessie_udp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.225: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.228: INFO: Unable to read jessie_udp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.231: INFO: Unable to read jessie_tcp@dns-test-service.dns-218 from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.234: INFO: Unable to read jessie_udp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.237: INFO: Unable 
to read jessie_tcp@dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.239: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.243: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-218.svc from pod dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020: the server could not find the requested resource (get pods dns-test-236508ab-e6ae-4169-9321-daafce7ef020) Sep 29 11:24:16.261: INFO: Lookups using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-218 wheezy_tcp@dns-test-service.dns-218 wheezy_udp@dns-test-service.dns-218.svc wheezy_tcp@dns-test-service.dns-218.svc wheezy_udp@_http._tcp.dns-test-service.dns-218.svc wheezy_tcp@_http._tcp.dns-test-service.dns-218.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-218 jessie_tcp@dns-test-service.dns-218 jessie_udp@dns-test-service.dns-218.svc jessie_tcp@dns-test-service.dns-218.svc jessie_udp@_http._tcp.dns-test-service.dns-218.svc jessie_tcp@_http._tcp.dns-test-service.dns-218.svc] Sep 29 11:24:21.262: INFO: DNS probes using dns-218/dns-test-236508ab-e6ae-4169-9321-daafce7ef020 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:24:21.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-218" for this suite. 
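The "Lookups … failed for: […]" lists above are the full cross product of two client images (wheezy, jessie), two protocols (udp, tcp), and four name forms for the service under test. A minimal sketch reconstructing that probe matrix, with the service and namespace names taken from this log (not a reimplementation of the e2e test itself):

```python
# Reconstruct the probe-name matrix that the e2e DNS test reports as
# "Lookups ... failed for: [...]"; service/namespace names are from the log.
service = "dns-test-service"
namespace = "dns-218"

name_forms = [
    service,                                  # bare service name
    f"{service}.{namespace}",                 # service.namespace
    f"{service}.{namespace}.svc",             # service.namespace.svc
    f"_http._tcp.{service}.{namespace}.svc",  # SRV-style name for the http port
]

# Order matches the log: per image, each name form is probed over udp then tcp.
probes = [
    f"{image}_{proto}@{name}"
    for image in ("wheezy", "jessie")
    for name in name_forms
    for proto in ("udp", "tcp")
]

print(len(probes))  # 16 probe names, matching the 16 entries in each failure list
```

Once every probe resolves inside the pod, the test logs the single "DNS probes … succeeded" line seen above; until then each polling round re-lists all 16 names.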
• [SLOW TEST:46.741 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":154,"skipped":2448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:24:22.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7440 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 29 11:24:22.121: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 29 11:24:22.192: INFO: The status 
of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:24:24.196: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:24:26.196: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:28.209: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:30.195: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:32.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:34.195: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:36.196: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:38.195: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:40.207: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:42.196: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:24:44.196: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 29 11:24:44.201: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 29 11:24:48.362: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.103:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7440 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:24:48.362: INFO: >>> kubeConfig: /root/.kube/config I0929 11:24:48.394565 7 log.go:181] (0xc00001c580) (0xc000f42500) Create stream I0929 11:24:48.394598 7 log.go:181] (0xc00001c580) (0xc000f42500) Stream added, broadcasting: 1 I0929 11:24:48.400131 7 log.go:181] (0xc00001c580) Reply frame received for 1 I0929 11:24:48.400158 7 log.go:181] (0xc00001c580) (0xc0030ada40) Create stream I0929 11:24:48.400169 7 log.go:181] (0xc00001c580) (0xc0030ada40) Stream added, broadcasting: 3 I0929 11:24:48.401394 7 log.go:181] 
(0xc00001c580) Reply frame received for 3 I0929 11:24:48.401452 7 log.go:181] (0xc00001c580) (0xc0030adae0) Create stream I0929 11:24:48.401469 7 log.go:181] (0xc00001c580) (0xc0030adae0) Stream added, broadcasting: 5 I0929 11:24:48.402333 7 log.go:181] (0xc00001c580) Reply frame received for 5 I0929 11:24:48.478455 7 log.go:181] (0xc00001c580) Data frame received for 5 I0929 11:24:48.478492 7 log.go:181] (0xc0030adae0) (5) Data frame handling I0929 11:24:48.478520 7 log.go:181] (0xc00001c580) Data frame received for 3 I0929 11:24:48.478534 7 log.go:181] (0xc0030ada40) (3) Data frame handling I0929 11:24:48.478553 7 log.go:181] (0xc0030ada40) (3) Data frame sent I0929 11:24:48.478567 7 log.go:181] (0xc00001c580) Data frame received for 3 I0929 11:24:48.478581 7 log.go:181] (0xc0030ada40) (3) Data frame handling I0929 11:24:48.480336 7 log.go:181] (0xc00001c580) Data frame received for 1 I0929 11:24:48.480380 7 log.go:181] (0xc000f42500) (1) Data frame handling I0929 11:24:48.480420 7 log.go:181] (0xc000f42500) (1) Data frame sent I0929 11:24:48.480450 7 log.go:181] (0xc00001c580) (0xc000f42500) Stream removed, broadcasting: 1 I0929 11:24:48.480474 7 log.go:181] (0xc00001c580) Go away received I0929 11:24:48.480599 7 log.go:181] (0xc00001c580) (0xc000f42500) Stream removed, broadcasting: 1 I0929 11:24:48.480646 7 log.go:181] (0xc00001c580) (0xc0030ada40) Stream removed, broadcasting: 3 I0929 11:24:48.480671 7 log.go:181] (0xc00001c580) (0xc0030adae0) Stream removed, broadcasting: 5 Sep 29 11:24:48.480: INFO: Found all expected endpoints: [netserver-0] Sep 29 11:24:48.483: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.87:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7440 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:24:48.483: INFO: >>> kubeConfig: /root/.kube/config I0929 11:24:48.518316 7 log.go:181] 
(0xc000192370) (0xc00375f680) Create stream I0929 11:24:48.518347 7 log.go:181] (0xc000192370) (0xc00375f680) Stream added, broadcasting: 1 I0929 11:24:48.520348 7 log.go:181] (0xc000192370) Reply frame received for 1 I0929 11:24:48.520380 7 log.go:181] (0xc000192370) (0xc000f42780) Create stream I0929 11:24:48.520396 7 log.go:181] (0xc000192370) (0xc000f42780) Stream added, broadcasting: 3 I0929 11:24:48.521447 7 log.go:181] (0xc000192370) Reply frame received for 3 I0929 11:24:48.521506 7 log.go:181] (0xc000192370) (0xc006849220) Create stream I0929 11:24:48.521534 7 log.go:181] (0xc000192370) (0xc006849220) Stream added, broadcasting: 5 I0929 11:24:48.522449 7 log.go:181] (0xc000192370) Reply frame received for 5 I0929 11:24:48.593395 7 log.go:181] (0xc000192370) Data frame received for 3 I0929 11:24:48.593492 7 log.go:181] (0xc000f42780) (3) Data frame handling I0929 11:24:48.593534 7 log.go:181] (0xc000f42780) (3) Data frame sent I0929 11:24:48.593545 7 log.go:181] (0xc000192370) Data frame received for 3 I0929 11:24:48.593552 7 log.go:181] (0xc000f42780) (3) Data frame handling I0929 11:24:48.593850 7 log.go:181] (0xc000192370) Data frame received for 5 I0929 11:24:48.593897 7 log.go:181] (0xc006849220) (5) Data frame handling I0929 11:24:48.595342 7 log.go:181] (0xc000192370) Data frame received for 1 I0929 11:24:48.595391 7 log.go:181] (0xc00375f680) (1) Data frame handling I0929 11:24:48.595429 7 log.go:181] (0xc00375f680) (1) Data frame sent I0929 11:24:48.595472 7 log.go:181] (0xc000192370) (0xc00375f680) Stream removed, broadcasting: 1 I0929 11:24:48.595501 7 log.go:181] (0xc000192370) Go away received I0929 11:24:48.595600 7 log.go:181] (0xc000192370) (0xc00375f680) Stream removed, broadcasting: 1 I0929 11:24:48.595631 7 log.go:181] (0xc000192370) (0xc000f42780) Stream removed, broadcasting: 3 I0929 11:24:48.595646 7 log.go:181] (0xc000192370) (0xc006849220) Stream removed, broadcasting: 5 Sep 29 11:24:48.595: INFO: Found all expected endpoints: 
[netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:24:48.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7440" for this suite. • [SLOW TEST:26.535 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2543,"failed":0} SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:24:48.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:24:48.676: INFO: Creating deployment "test-recreate-deployment" Sep 29 11:24:48.686: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Sep 29 11:24:48.746: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Sep 29 11:24:50.937: INFO: Waiting deployment "test-recreate-deployment" to complete Sep 29 11:24:50.940: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975488, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975488, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975488, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975488, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:24:52.944: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Sep 29 11:24:53.001: INFO: Updating deployment test-recreate-deployment Sep 29 11:24:53.001: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds 
pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 29 11:24:53.664: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4454 /apis/apps/v1/namespaces/deployment-4454/deployments/test-recreate-deployment de37bcbb-480c-40a6-8ecb-840a54d07958 1609009 2 2020-09-29 11:24:48 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-29 11:24:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 11:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004a53ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-29 11:24:53 +0000 UTC,LastTransitionTime:2020-09-29 11:24:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-09-29 11:24:53 +0000 UTC,LastTransitionTime:2020-09-29 11:24:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Sep 29 11:24:53.671: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-4454 /apis/apps/v1/namespaces/deployment-4454/replicasets/test-recreate-deployment-f79dd4667 c4b49064-2319-45c3-860b-24fd8406119b 1609007 1 2020-09-29 11:24:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment de37bcbb-480c-40a6-8ecb-840a54d07958 0xc00491a090 0xc00491a091}] [] [{kube-controller-manager Update apps/v1 2020-09-29 11:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de37bcbb-480c-40a6-8ecb-840a54d07958\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491a108 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 11:24:53.671: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Sep 29 11:24:53.671: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-4454 /apis/apps/v1/namespaces/deployment-4454/replicasets/test-recreate-deployment-c96cf48f fce4d62d-4a42-4ba7-8a55-e0c8aa475fe5 1608996 2 2020-09-29 11:24:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment de37bcbb-480c-40a6-8ecb-840a54d07958 0xc004a53f9f 0xc004a53fb0}] [] [{kube-controller-manager Update apps/v1 2020-09-29 11:24:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de37bcbb-480c-40a6-8ecb-840a54d07958\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00491a028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] 
[] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 11:24:53.675: INFO: Pod "test-recreate-deployment-f79dd4667-7jtms" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-7jtms test-recreate-deployment-f79dd4667- deployment-4454 /api/v1/namespaces/deployment-4454/pods/test-recreate-deployment-f79dd4667-7jtms e9b36c4b-f7fd-4715-afda-aa29892195e0 1609008 0 2020-09-29 11:24:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 c4b49064-2319-45c3-860b-24fd8406119b 0xc003794740 0xc003794741}] [] [{kube-controller-manager Update v1 2020-09-29 11:24:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4b49064-2319-45c3-860b-24fd8406119b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 11:24:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n8m2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n8m2f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n8m2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:24:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 11:24:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-29 11:24:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 11:24:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:24:53.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4454" for this suite. • [SLOW TEST:5.076 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":156,"skipped":2545,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle 
Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:24:53.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 29 11:25:03.894: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:03.899: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:05.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:05.907: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:07.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:07.904: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:09.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:09.904: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:11.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:11.903: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:13.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:13.905: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:15.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 
11:25:15.904: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:17.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:17.904: INFO: Pod pod-with-prestop-http-hook still exists Sep 29 11:25:19.899: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Sep 29 11:25:19.904: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:25:19.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-164" for this suite. • [SLOW TEST:26.254 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":157,"skipped":2553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:25:19.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:25:31.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2710" for this suite. • [SLOW TEST:11.275 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":158,"skipped":2578,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:25:31.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-52eafa21-d498-4238-8a91-8f22208a3fc3 STEP: Creating a pod to test consume secrets Sep 29 11:25:31.328: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540" in namespace "projected-4351" to be "Succeeded or Failed" Sep 29 11:25:31.401: INFO: Pod "pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540": Phase="Pending", Reason="", readiness=false. Elapsed: 73.734756ms Sep 29 11:25:33.406: INFO: Pod "pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.07792122s Sep 29 11:25:35.410: INFO: Pod "pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082224612s STEP: Saw pod success Sep 29 11:25:35.410: INFO: Pod "pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540" satisfied condition "Succeeded or Failed" Sep 29 11:25:35.413: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540 container projected-secret-volume-test: STEP: delete the pod Sep 29 11:25:35.476: INFO: Waiting for pod pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540 to disappear Sep 29 11:25:35.487: INFO: Pod pod-projected-secrets-282c39d1-3f81-4d6b-9955-b1bf22651540 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:25:35.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4351" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2588,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:25:35.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Sep 29 11:25:35.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-6517 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Sep 29 11:25:38.360: INFO: stderr: "" Sep 29 11:25:38.360: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. 
Sep 29 11:25:38.360: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Sep 29 11:25:38.360: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6517" to be "running and ready, or succeeded" Sep 29 11:25:38.389: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 29.258369ms Sep 29 11:25:40.462: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101784026s Sep 29 11:25:42.466: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.105987167s Sep 29 11:25:42.466: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Sep 29 11:25:42.466: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Sep 29 11:25:42.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517' Sep 29 11:25:42.580: INFO: stderr: "" Sep 29 11:25:42.581: INFO: stdout: "I0929 11:25:40.757355 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/lsk7 493\nI0929 11:25:40.957492 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jjf 421\nI0929 11:25:41.157551 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/55jf 268\nI0929 11:25:41.357505 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/dg7 393\nI0929 11:25:41.557572 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/pd5k 432\nI0929 11:25:41.757541 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/t9d6 525\nI0929 11:25:41.957512 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/kb54 422\nI0929 11:25:42.157542 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/cxz 497\nI0929 11:25:42.357598 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/277 529\nI0929 11:25:42.557476 1 logs_generator.go:76] 9 PUT 
/api/v1/namespaces/ns/pods/8rjb 243\n" STEP: limiting log lines Sep 29 11:25:42.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517 --tail=1' Sep 29 11:25:42.708: INFO: stderr: "" Sep 29 11:25:42.708: INFO: stdout: "I0929 11:25:42.557476 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8rjb 243\n" Sep 29 11:25:42.708: INFO: got output "I0929 11:25:42.557476 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8rjb 243\n" STEP: limiting log bytes Sep 29 11:25:42.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517 --limit-bytes=1' Sep 29 11:25:42.818: INFO: stderr: "" Sep 29 11:25:42.818: INFO: stdout: "I" Sep 29 11:25:42.818: INFO: got output "I" STEP: exposing timestamps Sep 29 11:25:42.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517 --tail=1 --timestamps' Sep 29 11:25:42.926: INFO: stderr: "" Sep 29 11:25:42.926: INFO: stdout: "2020-09-29T11:25:42.757722993Z I0929 11:25:42.757523 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/gzf 464\n" Sep 29 11:25:42.926: INFO: got output "2020-09-29T11:25:42.757722993Z I0929 11:25:42.757523 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/gzf 464\n" STEP: restricting to a time range Sep 29 11:25:45.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517 --since=1s' Sep 29 11:25:45.531: INFO: stderr: "" Sep 29 11:25:45.531: INFO: stdout: "I0929 11:25:44.557588 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/l6lj 339\nI0929 11:25:44.757612 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/5qgd 
453\nI0929 11:25:44.957566 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/qwc 400\nI0929 11:25:45.157518 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/pj6 354\nI0929 11:25:45.357565 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/qx7j 593\n" Sep 29 11:25:45.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6517 --since=24h' Sep 29 11:25:45.662: INFO: stderr: "" Sep 29 11:25:45.662: INFO: stdout: "I0929 11:25:40.757355 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/lsk7 493\nI0929 11:25:40.957492 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/jjf 421\nI0929 11:25:41.157551 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/55jf 268\nI0929 11:25:41.357505 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/dg7 393\nI0929 11:25:41.557572 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/pd5k 432\nI0929 11:25:41.757541 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/t9d6 525\nI0929 11:25:41.957512 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/kb54 422\nI0929 11:25:42.157542 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/cxz 497\nI0929 11:25:42.357598 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/277 529\nI0929 11:25:42.557476 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/8rjb 243\nI0929 11:25:42.757523 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/gzf 464\nI0929 11:25:42.957551 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/8rk 246\nI0929 11:25:43.157561 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/c5s 387\nI0929 11:25:43.357588 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/8zk 310\nI0929 11:25:43.557588 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/7rt8 254\nI0929 11:25:43.757560 1 logs_generator.go:76] 15 POST 
/api/v1/namespaces/default/pods/dds 350\nI0929 11:25:43.957420 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/gdlz 384\nI0929 11:25:44.157535 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/dzsq 464\nI0929 11:25:44.357576 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/jpv 324\nI0929 11:25:44.557588 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/l6lj 339\nI0929 11:25:44.757612 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/5qgd 453\nI0929 11:25:44.957566 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/qwc 400\nI0929 11:25:45.157518 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/pj6 354\nI0929 11:25:45.357565 1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/qx7j 593\nI0929 11:25:45.557566 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/8xhp 498\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Sep 29 11:25:45.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6517' Sep 29 11:25:58.648: INFO: stderr: "" Sep 29 11:25:58.648: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:25:58.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6517" for this suite. 
• [SLOW TEST:23.182 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":160,"skipped":2593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:25:58.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:26:14.737: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9551" for this suite. • [SLOW TEST:16.071 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":161,"skipped":2618,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:26:14.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:26:14.823: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Sep 29 11:26:16.777: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 create -f -' Sep 29 11:26:21.471: INFO: stderr: "" Sep 29 11:26:21.471: INFO: stdout: "e2e-test-crd-publish-openapi-4381-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 29 11:26:21.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 delete e2e-test-crd-publish-openapi-4381-crds test-foo' Sep 29 11:26:21.857: INFO: stderr: "" Sep 29 11:26:21.857: INFO: stdout: "e2e-test-crd-publish-openapi-4381-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Sep 29 11:26:21.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 apply -f -' Sep 29 11:26:22.411: INFO: stderr: "" Sep 29 11:26:22.411: INFO: stdout: "e2e-test-crd-publish-openapi-4381-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Sep 29 11:26:22.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 delete e2e-test-crd-publish-openapi-4381-crds test-foo' Sep 29 11:26:22.514: INFO: stderr: "" Sep 29 11:26:22.514: INFO: stdout: "e2e-test-crd-publish-openapi-4381-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Sep 29 11:26:22.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 create -f -' Sep 29 11:26:22.781: INFO: rc: 1 Sep 29 11:26:22.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 apply -f -' Sep 29 11:26:23.038: INFO: rc: 1 STEP: client-side 
validation (kubectl create and apply) rejects request without required properties Sep 29 11:26:23.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 create -f -' Sep 29 11:26:23.325: INFO: rc: 1 Sep 29 11:26:23.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2490 apply -f -' Sep 29 11:26:23.584: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Sep 29 11:26:23.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4381-crds' Sep 29 11:26:23.868: INFO: stderr: "" Sep 29 11:26:23.868: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4381-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Sep 29 11:26:23.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4381-crds.metadata' Sep 29 11:26:24.298: INFO: stderr: "" Sep 29 11:26:24.299: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4381-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Sep 29 11:26:24.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4381-crds.spec' Sep 29 11:26:24.736: INFO: stderr: "" Sep 29 11:26:24.736: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4381-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Sep 29 11:26:24.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4381-crds.spec.bars' Sep 29 11:26:25.014: INFO: stderr: "" Sep 29 11:26:25.014: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4381-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Sep 29 11:26:25.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4381-crds.spec.bars2' Sep 29 11:26:25.290: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:26:27.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2490" for this suite. • [SLOW TEST:12.475 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":162,"skipped":2628,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating 
a kubernetes client Sep 29 11:26:27.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:26:59.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5278" for this suite. 
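The three containers above exercise the three `restartPolicy` values (the `-rpa`/`-rpof`/`-rpn` suffixes correspond to Always/OnFailure/Never). A minimal sketch of the documented restart-policy semantics this test asserts against, assuming a simplified model — these function names are illustrative, not the e2e framework's actual code:

```python
# Sketch of Kubernetes restartPolicy semantics (illustrative only).
# Given a restart policy and a container's exit code, decide whether
# the kubelet restarts the container, and what terminal pod phase to expect.

def should_restart(restart_policy: str, exit_code: int) -> bool:
    if restart_policy == "Always":
        return True                      # restarted regardless of exit code
    if restart_policy == "OnFailure":
        return exit_code != 0            # restarted only on non-zero exit
    if restart_policy == "Never":
        return False                     # never restarted
    raise ValueError(f"unknown restartPolicy: {restart_policy}")

def terminal_phase(restart_policy: str, exit_code: int) -> str:
    """Expected pod phase once the container exits and will not restart."""
    if should_restart(restart_policy, exit_code):
        return "Running"                 # pod keeps running, RestartCount grows
    return "Succeeded" if exit_code == 0 else "Failed"
```

This is why the test checks 'RestartCount', 'Phase', and 'State' separately for each of the three containers: the expected values differ per policy.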
• [SLOW TEST:32.010 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2631,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:26:59.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-c8404ec6-ac10-4dab-963b-daa5cfe35d43 STEP: Creating a pod to test consume configMaps Sep 29 11:26:59.312: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b" in namespace "projected-9924" to be "Succeeded or Failed" Sep 29 11:26:59.328: INFO: Pod "pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.848205ms Sep 29 11:27:01.332: INFO: Pod "pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020491985s Sep 29 11:27:03.336: INFO: Pod "pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024565814s STEP: Saw pod success Sep 29 11:27:03.336: INFO: Pod "pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b" satisfied condition "Succeeded or Failed" Sep 29 11:27:03.339: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b container projected-configmap-volume-test: STEP: delete the pod Sep 29 11:27:03.378: INFO: Waiting for pod pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b to disappear Sep 29 11:27:03.392: INFO: Pod pod-projected-configmaps-f7fdfb71-7b13-473c-8475-afb332106b9b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:27:03.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9924" for this suite. 
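The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines with growing Elapsed values come from a simple poll-until-terminal-phase loop. A hedged sketch of that pattern — the `get_phase` callback and the interval are assumptions, not the framework's actual signatures:

```python
import time

def wait_for_pod_condition(get_phase, timeout_s=300.0, interval_s=2.0):
    """Poll get_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the log pattern above: each poll reports the phase and elapsed time.
    Returns the terminal phase, or raises TimeoutError.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        time.sleep(interval_s)
```

In the run above the pod went Pending → Pending → Succeeded across three polls roughly 2s apart, matching this shape.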
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2640,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:27:03.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-t5wr STEP: Creating a pod to test atomic-volume-subpath Sep 29 11:27:03.615: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-t5wr" in namespace "subpath-16" to be "Succeeded or Failed" Sep 29 11:27:03.619: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Pending", Reason="", readiness=false. Elapsed: 3.859558ms Sep 29 11:27:05.623: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007905299s Sep 29 11:27:07.628: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 4.012792702s Sep 29 11:27:09.634: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 6.018280653s Sep 29 11:27:11.639: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 8.023513305s Sep 29 11:27:13.643: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 10.027858312s Sep 29 11:27:15.648: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 12.032203733s Sep 29 11:27:17.653: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 14.037385524s Sep 29 11:27:19.658: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 16.043010143s Sep 29 11:27:21.664: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 18.048117747s Sep 29 11:27:23.668: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 20.052547321s Sep 29 11:27:25.890: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Running", Reason="", readiness=true. Elapsed: 22.274580962s Sep 29 11:27:28.497: INFO: Pod "pod-subpath-test-configmap-t5wr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.881275297s STEP: Saw pod success Sep 29 11:27:28.497: INFO: Pod "pod-subpath-test-configmap-t5wr" satisfied condition "Succeeded or Failed" Sep 29 11:27:28.500: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-t5wr container test-container-subpath-configmap-t5wr: STEP: delete the pod Sep 29 11:27:28.766: INFO: Waiting for pod pod-subpath-test-configmap-t5wr to disappear Sep 29 11:27:28.802: INFO: Pod pod-subpath-test-configmap-t5wr no longer exists STEP: Deleting pod pod-subpath-test-configmap-t5wr Sep 29 11:27:28.802: INFO: Deleting pod "pod-subpath-test-configmap-t5wr" in namespace "subpath-16" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:27:28.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-16" for this suite. • [SLOW TEST:25.381 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":165,"skipped":2650,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:27:28.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7558 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7558 I0929 11:27:29.471499 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7558, replica count: 2 I0929 11:27:32.521894 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:27:35.522102 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:27:35.522: INFO: Creating new exec pod Sep 29 11:27:42.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7558 execpodkjhz7 -- /bin/sh -x -c nc -zv -t -w 2 
externalname-service 80' Sep 29 11:27:42.755: INFO: stderr: "I0929 11:27:42.660169 1863 log.go:181] (0xc00016ee70) (0xc000cfa460) Create stream\nI0929 11:27:42.660206 1863 log.go:181] (0xc00016ee70) (0xc000cfa460) Stream added, broadcasting: 1\nI0929 11:27:42.663675 1863 log.go:181] (0xc00016ee70) Reply frame received for 1\nI0929 11:27:42.663710 1863 log.go:181] (0xc00016ee70) (0xc00081e000) Create stream\nI0929 11:27:42.663724 1863 log.go:181] (0xc00016ee70) (0xc00081e000) Stream added, broadcasting: 3\nI0929 11:27:42.664574 1863 log.go:181] (0xc00016ee70) Reply frame received for 3\nI0929 11:27:42.664599 1863 log.go:181] (0xc00016ee70) (0xc00081e0a0) Create stream\nI0929 11:27:42.664612 1863 log.go:181] (0xc00016ee70) (0xc00081e0a0) Stream added, broadcasting: 5\nI0929 11:27:42.665412 1863 log.go:181] (0xc00016ee70) Reply frame received for 5\nI0929 11:27:42.747945 1863 log.go:181] (0xc00016ee70) Data frame received for 5\nI0929 11:27:42.747971 1863 log.go:181] (0xc00081e0a0) (5) Data frame handling\nI0929 11:27:42.747988 1863 log.go:181] (0xc00081e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0929 11:27:42.748316 1863 log.go:181] (0xc00016ee70) Data frame received for 5\nI0929 11:27:42.748331 1863 log.go:181] (0xc00081e0a0) (5) Data frame handling\nI0929 11:27:42.748347 1863 log.go:181] (0xc00081e0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0929 11:27:42.748951 1863 log.go:181] (0xc00016ee70) Data frame received for 3\nI0929 11:27:42.748977 1863 log.go:181] (0xc00081e000) (3) Data frame handling\nI0929 11:27:42.748996 1863 log.go:181] (0xc00016ee70) Data frame received for 5\nI0929 11:27:42.749006 1863 log.go:181] (0xc00081e0a0) (5) Data frame handling\nI0929 11:27:42.750694 1863 log.go:181] (0xc00016ee70) Data frame received for 1\nI0929 11:27:42.750715 1863 log.go:181] (0xc000cfa460) (1) Data frame handling\nI0929 11:27:42.750727 1863 log.go:181] (0xc000cfa460) (1) Data frame sent\nI0929 
11:27:42.750794 1863 log.go:181] (0xc00016ee70) (0xc000cfa460) Stream removed, broadcasting: 1\nI0929 11:27:42.750943 1863 log.go:181] (0xc00016ee70) Go away received\nI0929 11:27:42.751144 1863 log.go:181] (0xc00016ee70) (0xc000cfa460) Stream removed, broadcasting: 1\nI0929 11:27:42.751160 1863 log.go:181] (0xc00016ee70) (0xc00081e000) Stream removed, broadcasting: 3\nI0929 11:27:42.751169 1863 log.go:181] (0xc00016ee70) (0xc00081e0a0) Stream removed, broadcasting: 5\n" Sep 29 11:27:42.755: INFO: stdout: "" Sep 29 11:27:42.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7558 execpodkjhz7 -- /bin/sh -x -c nc -zv -t -w 2 10.106.242.109 80' Sep 29 11:27:42.951: INFO: stderr: "I0929 11:27:42.878260 1881 log.go:181] (0xc0005b91e0) (0xc00050fe00) Create stream\nI0929 11:27:42.878303 1881 log.go:181] (0xc0005b91e0) (0xc00050fe00) Stream added, broadcasting: 1\nI0929 11:27:42.882468 1881 log.go:181] (0xc0005b91e0) Reply frame received for 1\nI0929 11:27:42.882535 1881 log.go:181] (0xc0005b91e0) (0xc00050e000) Create stream\nI0929 11:27:42.882571 1881 log.go:181] (0xc0005b91e0) (0xc00050e000) Stream added, broadcasting: 3\nI0929 11:27:42.883637 1881 log.go:181] (0xc0005b91e0) Reply frame received for 3\nI0929 11:27:42.883669 1881 log.go:181] (0xc0005b91e0) (0xc000b96000) Create stream\nI0929 11:27:42.883680 1881 log.go:181] (0xc0005b91e0) (0xc000b96000) Stream added, broadcasting: 5\nI0929 11:27:42.884705 1881 log.go:181] (0xc0005b91e0) Reply frame received for 5\nI0929 11:27:42.947243 1881 log.go:181] (0xc0005b91e0) Data frame received for 3\nI0929 11:27:42.947267 1881 log.go:181] (0xc00050e000) (3) Data frame handling\nI0929 11:27:42.947476 1881 log.go:181] (0xc0005b91e0) Data frame received for 5\nI0929 11:27:42.947505 1881 log.go:181] (0xc000b96000) (5) Data frame handling\nI0929 11:27:42.947524 1881 log.go:181] (0xc000b96000) (5) Data frame sent\nI0929 11:27:42.947535 1881 
log.go:181] (0xc0005b91e0) Data frame received for 5\nI0929 11:27:42.947544 1881 log.go:181] (0xc000b96000) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.242.109 80\nConnection to 10.106.242.109 80 port [tcp/http] succeeded!\nI0929 11:27:42.948377 1881 log.go:181] (0xc0005b91e0) Data frame received for 1\nI0929 11:27:42.948394 1881 log.go:181] (0xc00050fe00) (1) Data frame handling\nI0929 11:27:42.948406 1881 log.go:181] (0xc00050fe00) (1) Data frame sent\nI0929 11:27:42.948542 1881 log.go:181] (0xc0005b91e0) (0xc00050fe00) Stream removed, broadcasting: 1\nI0929 11:27:42.948569 1881 log.go:181] (0xc0005b91e0) Go away received\nI0929 11:27:42.949118 1881 log.go:181] (0xc0005b91e0) (0xc00050fe00) Stream removed, broadcasting: 1\nI0929 11:27:42.949142 1881 log.go:181] (0xc0005b91e0) (0xc00050e000) Stream removed, broadcasting: 3\nI0929 11:27:42.949153 1881 log.go:181] (0xc0005b91e0) (0xc000b96000) Stream removed, broadcasting: 5\n" Sep 29 11:27:42.951: INFO: stdout: "" Sep 29 11:27:42.951: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:27:42.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7558" for this suite. 
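The `nc -zv -t -w 2 <host> <port>` invocations above only verify TCP reachability of the service name and then the ClusterIP; no payload is exchanged. The same check can be sketched with the Python standard library (the host and port are placeholders, not this cluster's values):

```python
import socket

def tcp_reachable(host: str, port: int, timeout_s: float = 2.0) -> bool:
    """Rough equivalent of `nc -zv -t -w 2 host port`: attempt a TCP
    connect within the timeout and report success/failure, sending no data."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False
```

The test runs this twice because an ExternalName-to-ClusterIP conversion must leave both the DNS name (`externalname-service`) and the newly allocated ClusterIP answering on port 80.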
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.157 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":166,"skipped":2652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:27:43.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 29 11:27:43.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 29 11:27:43.094: INFO: Waiting for terminating namespaces to be deleted... 
Sep 29 11:27:43.096: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 29 11:27:43.101: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.101: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:27:43.101: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.101: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:27:43.101: INFO: externalname-service-42h7s from services-7558 started at 2020-09-29 11:27:30 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.101: INFO: Container externalname-service ready: true, restart count 0 Sep 29 11:27:43.101: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 29 11:27:43.106: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.106: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:27:43.106: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.106: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:27:43.106: INFO: execpodkjhz7 from services-7558 started at 2020-09-29 11:27:35 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.106: INFO: Container agnhost-pause ready: true, restart count 0 Sep 29 11:27:43.106: INFO: externalname-service-8fnkb from services-7558 started at 2020-09-29 11:27:29 +0000 UTC (1 container statuses recorded) Sep 29 11:27:43.106: INFO: Container externalname-service ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node kali-worker STEP: 
verifying the node has the label node kali-worker2 Sep 29 11:27:43.175: INFO: Pod kindnet-pdv4j requesting resource cpu=100m on Node kali-worker Sep 29 11:27:43.175: INFO: Pod kindnet-pgjc7 requesting resource cpu=100m on Node kali-worker2 Sep 29 11:27:43.175: INFO: Pod kube-proxy-qhsmg requesting resource cpu=0m on Node kali-worker2 Sep 29 11:27:43.175: INFO: Pod kube-proxy-qsqz8 requesting resource cpu=0m on Node kali-worker Sep 29 11:27:43.175: INFO: Pod execpodkjhz7 requesting resource cpu=0m on Node kali-worker2 Sep 29 11:27:43.175: INFO: Pod externalname-service-42h7s requesting resource cpu=0m on Node kali-worker Sep 29 11:27:43.175: INFO: Pod externalname-service-8fnkb requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. Sep 29 11:27:43.175: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Sep 29 11:27:43.180: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
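The filler-pod request (cpu=11130m) is derived from each node's allocatable CPU minus the sum of the existing pods' requests, in Kubernetes millicore notation ('100m' = 0.1 CPU, a bare '2' = 2 CPUs). A small sketch of that arithmetic — the allocatable figure in the usage below is a made-up example, not taken from this cluster:

```python
def parse_cpu_millis(quantity: str) -> int:
    """Parse a Kubernetes CPU quantity into millicores: '100m' -> 100, '2' -> 2000."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def filler_request_millis(allocatable: str, pod_requests: list) -> int:
    """CPU left on the node after existing pods' requests; what a filler pod
    would request to consume (most of) the remaining capacity."""
    return parse_cpu_millis(allocatable) - sum(parse_cpu_millis(q) for q in pod_requests)
```

For instance, a hypothetical node with allocatable "16" CPUs running the requests logged above for kali-worker ("100m" for kindnet, "0m" for the rest) would leave 15900m for a filler pod; once both nodes are filled, the extra pod fails scheduling with "Insufficient cpu", which is exactly the Warning event the test waits for below.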
STEP: Considering event: Type = [Normal], Name = [filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be.16393d97fc9dcc7d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be.16393d984a252d3b], Reason = [Started], Message = [Started container filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad.16393d97bbb2b70b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be.16393d983c87e131], Reason = [Created], Message = [Created container filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad.16393d980f650629], Reason = [Created], Message = [Created container filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad]
STEP: Considering event: Type = [Normal], Name = [filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be.16393d9774a59f96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3927/filler-pod-546a079a-b6c9-46e4-af73-e8a5620633be to kali-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad.16393d9823bafbae], Reason = [Started], Message = [Started container filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad]
STEP: Considering event: Type = [Normal], Name = [filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad.16393d9773080feb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3927/filler-pod-4f42163d-42a8-47d0-b246-6b539b3dbbad to kali-worker]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16393d98db66a6bd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16393d98e0a139e2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:27:50.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3927" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:7.441 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":167,"skipped":2695,"failed":0}
SSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:27:50.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-705.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 29 11:27:57.054: INFO: DNS probes using dns-705/dns-test-fd2a06e1-0244-4a04-abc6-9b3f5f8007f5 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:27:57.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-705" for this suite.
• [SLOW TEST:6.681 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":168,"skipped":2698,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client Sep
29 11:27:57.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9299 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9299 to expose endpoints map[] Sep 29 11:27:57.738: INFO: successfully validated that service endpoint-test2 in namespace services-9299 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9299 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9299 to expose endpoints map[pod1:[80]] Sep 29 11:28:01.947: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]], will retry Sep 29 11:28:02.950: INFO: successfully validated that service endpoint-test2 in namespace services-9299 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9299 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9299 to expose endpoints map[pod1:[80] pod2:[80]] Sep 29 11:28:06.045: INFO: successfully validated that service endpoint-test2 in namespace services-9299 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9299 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9299 to expose endpoints map[pod2:[80]] Sep 29 11:28:06.138: INFO: successfully validated that service endpoint-test2 in namespace services-9299 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9299 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-9299 to expose endpoints map[] Sep 29 11:28:07.160: INFO: successfully validated that service endpoint-test2 in namespace services-9299 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:28:07.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9299" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:10.089 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":169,"skipped":2722,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:28:07.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Sep 29 11:28:07.334: INFO: Waiting up to 5m0s for pod "var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284" in namespace "var-expansion-3901" to be "Succeeded or Failed" Sep 29 11:28:07.384: INFO: Pod "var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284": Phase="Pending", Reason="", readiness=false. Elapsed: 49.265003ms Sep 29 11:28:09.388: INFO: Pod "var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053624063s Sep 29 11:28:11.391: INFO: Pod "var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056904678s STEP: Saw pod success Sep 29 11:28:11.391: INFO: Pod "var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284" satisfied condition "Succeeded or Failed" Sep 29 11:28:11.393: INFO: Trying to get logs from node kali-worker pod var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284 container dapi-container: STEP: delete the pod Sep 29 11:28:11.465: INFO: Waiting for pod var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284 to disappear Sep 29 11:28:11.475: INFO: Pod var-expansion-e61bc801-c85c-4c8a-816c-71a2b2693284 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:28:11.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3901" for this suite. 
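The Variable Expansion test above composes env vars by referencing previously declared vars with Kubernetes' $(VAR) syntax. A minimal sketch of that composition rule, assuming a simplified expansion: vars are expanded in declaration order, unresolved references are left literal, and the $$ escape form is omitted for brevity.

```python
import re

def expand(value, env):
    """Expand $(VAR) references against previously defined env vars."""
    def repl(match):
        name = match.group(1)
        # Unknown references stay literal, as Kubernetes documents.
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, value)

# Compose env vars in declaration order, as a container spec would.
declared = [
    ("FOO", "foo-value"),
    ("BAR", "bar-$(FOO)"),      # composes an earlier var
    ("BAZ", "baz-$(MISSING)"),  # unknown reference is left as-is
]
env = {}
for name, raw in declared:
    env[name] = expand(raw, env)
```

Only vars declared earlier in the list are visible, which is why ordering matters in the pod spec this test creates.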
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:28:11.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:28:11.536: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:28:12.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-895" for this suite. 
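The CRD test above verifies that schema defaults are applied both to incoming API requests and to objects decoded from storage. A rough sketch of that defaulting pass, assuming a simplified structural schema where only `properties` and `default` are handled; the `replicas`/`paused` fields are illustrative, not the test's actual schema.

```python
def apply_defaults(obj, schema):
    """Recursively fill in OpenAPI-style `default` values for missing fields."""
    for field, sub in schema.get("properties", {}).items():
        if field not in obj and "default" in sub:
            obj[field] = sub["default"]
        # Descend into nested objects to default their fields too.
        if field in obj and isinstance(obj[field], dict):
            apply_defaults(obj[field], sub)
    return obj

schema = {
    "properties": {
        "spec": {
            "properties": {
                "replicas": {"default": 1},
                "paused": {"default": False},
            }
        }
    }
}
cr = {"spec": {"replicas": 3}}   # user sets replicas, omits paused
apply_defaults(cr, schema)
```

Because the apiserver runs this pass on reads from etcd as well, an object stored before a default was added still comes back with the field populated, which is the "from storage" half of the test.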
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":171,"skipped":2773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:28:12.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-8c260beb-d5a6-48e3-a3d9-91e83b19d845 STEP: Creating secret with name s-test-opt-upd-ada956fc-fb4a-4c1d-b478-628b8d287a4a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8c260beb-d5a6-48e3-a3d9-91e83b19d845 STEP: Updating secret s-test-opt-upd-ada956fc-fb4a-4c1d-b478-628b8d287a4a STEP: Creating secret with name s-test-opt-create-2223fb34-e9ad-4b01-a399-2e8d7630ac2f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:29:35.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4842" for 
this suite. • [SLOW TEST:82.707 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2801,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:29:35.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:29:35.532: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Sep 29 11:29:35.556: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:35.572: INFO: Number of nodes with available pods: 0 Sep 29 11:29:35.572: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:36.578: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:36.582: INFO: Number of nodes with available pods: 0 Sep 29 11:29:36.582: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:37.951: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:38.143: INFO: Number of nodes with available pods: 0 Sep 29 11:29:38.143: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:38.749: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:38.753: INFO: Number of nodes with available pods: 0 Sep 29 11:29:38.753: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:39.577: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:39.581: INFO: Number of nodes with available pods: 0 Sep 29 11:29:39.581: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:40.578: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:40.596: INFO: Number of nodes with available pods: 1 Sep 29 11:29:40.596: INFO: Node kali-worker 
is running more than one daemon pod Sep 29 11:29:41.578: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:41.581: INFO: Number of nodes with available pods: 2 Sep 29 11:29:41.581: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Sep 29 11:29:41.623: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:41.623: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:41.881: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:42.886: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:42.887: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:42.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:43.886: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:43.887: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 29 11:29:43.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:44.886: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:44.886: INFO: Pod daemon-set-9n8fw is not available Sep 29 11:29:44.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:44.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:45.886: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:45.886: INFO: Pod daemon-set-9n8fw is not available Sep 29 11:29:45.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:45.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:46.886: INFO: Wrong image for pod: daemon-set-9n8fw. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:46.886: INFO: Pod daemon-set-9n8fw is not available Sep 29 11:29:46.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:46.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:47.885: INFO: Wrong image for pod: daemon-set-9n8fw. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:47.885: INFO: Pod daemon-set-9n8fw is not available Sep 29 11:29:47.885: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:47.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:48.886: INFO: Pod daemon-set-kpz5t is not available Sep 29 11:29:48.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:48.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:49.886: INFO: Pod daemon-set-kpz5t is not available Sep 29 11:29:49.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:49.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:50.886: INFO: Pod daemon-set-kpz5t is not available Sep 29 11:29:50.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:50.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:51.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Sep 29 11:29:51.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:52.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:52.886: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:52.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:53.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:53.886: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:53.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:54.891: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:54.891: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:54.895: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:55.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:55.886: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:55.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:56.891: INFO: Wrong image for pod: daemon-set-rz4kv. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:56.891: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:56.894: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:57.886: INFO: Wrong image for pod: daemon-set-rz4kv. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Sep 29 11:29:57.886: INFO: Pod daemon-set-rz4kv is not available Sep 29 11:29:57.891: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:58.886: INFO: Pod daemon-set-5mfgd is not available Sep 29 11:29:58.890: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Sep 29 11:29:58.894: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:58.898: INFO: Number of nodes with available pods: 1 Sep 29 11:29:58.898: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:29:59.903: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:29:59.906: INFO: Number of nodes with available pods: 1 Sep 29 11:29:59.906: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:30:00.917: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:30:00.922: INFO: Number of nodes with available pods: 1 Sep 29 11:30:00.922: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:30:01.903: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:30:01.907: INFO: Number of nodes with available pods: 2 Sep 29 11:30:01.907: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4732, will wait for the garbage collector to delete the pods Sep 29 11:30:01.982: INFO: Deleting DaemonSet.extensions daemon-set took: 6.753779ms Sep 29 11:30:04.282: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.300213964s Sep 29 11:30:18.186: INFO: Number of nodes with available pods: 0 Sep 29 11:30:18.186: INFO: Number 
of running nodes: 0, number of available pods: 0 Sep 29 11:30:18.189: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4732/daemonsets","resourceVersion":"1610777"},"items":null} Sep 29 11:30:18.191: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4732/pods","resourceVersion":"1610777"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:30:18.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4732" for this suite. • [SLOW TEST:42.775 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":173,"skipped":2814,"failed":0} [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Sep 29 11:30:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 29 11:30:18.702: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 29 11:30:20.714: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:30:22.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736975818, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:30:25.754: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:30:25.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:30:26.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8378" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.908 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":174,"skipped":2814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:30:27.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:30:27.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee" in namespace "downward-api-8115" to be "Succeeded or Failed" Sep 29 11:30:27.274: INFO: Pod "downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.857245ms Sep 29 11:30:29.278: INFO: Pod "downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014971474s Sep 29 11:30:31.282: INFO: Pod "downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018497269s STEP: Saw pod success Sep 29 11:30:31.282: INFO: Pod "downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee" satisfied condition "Succeeded or Failed" Sep 29 11:30:31.284: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee container client-container: STEP: delete the pod Sep 29 11:30:31.318: INFO: Waiting for pod downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee to disappear Sep 29 11:30:31.328: INFO: Pod downwardapi-volume-109e16c6-2f25-43af-a8fb-d305f6b771ee no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:30:31.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8115" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":175,"skipped":2875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:30:31.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:30:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9753" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":176,"skipped":2900,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:30:31.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a 
pod to test downward api env vars Sep 29 11:30:31.588: INFO: Waiting up to 5m0s for pod "downward-api-d42d740f-79be-455e-90e8-7bdd269307be" in namespace "downward-api-6552" to be "Succeeded or Failed" Sep 29 11:30:31.592: INFO: Pod "downward-api-d42d740f-79be-455e-90e8-7bdd269307be": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479279ms Sep 29 11:30:33.596: INFO: Pod "downward-api-d42d740f-79be-455e-90e8-7bdd269307be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007838351s Sep 29 11:30:35.601: INFO: Pod "downward-api-d42d740f-79be-455e-90e8-7bdd269307be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012439723s STEP: Saw pod success Sep 29 11:30:35.601: INFO: Pod "downward-api-d42d740f-79be-455e-90e8-7bdd269307be" satisfied condition "Succeeded or Failed" Sep 29 11:30:35.604: INFO: Trying to get logs from node kali-worker pod downward-api-d42d740f-79be-455e-90e8-7bdd269307be container dapi-container: STEP: delete the pod Sep 29 11:30:35.670: INFO: Waiting for pod downward-api-d42d740f-79be-455e-90e8-7bdd269307be to disappear Sep 29 11:30:35.712: INFO: Pod downward-api-d42d740f-79be-455e-90e8-7bdd269307be no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:30:35.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6552" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2912,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:30:35.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 29 11:30:35.781: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 29 11:30:35.799: INFO: Waiting for terminating namespaces to be deleted... 
Sep 29 11:30:35.813: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 29 11:30:35.817: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:30:35.817: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:30:35.817: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:30:35.817: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:30:35.817: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 29 11:30:35.821: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:30:35.821: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:30:35.821: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:30:35.821: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b3441030-a416-46d9-8315-8c950822fa7f 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b3441030-a416-46d9-8315-8c950822fa7f off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b3441030-a416-46d9-8315-8c950822fa7f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:35:44.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1388" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.372 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":178,"skipped":2924,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with 
runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:35:44.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:35:44.158: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0f9af955-eed5-4570-b10d-907d18ee5e4d" in namespace "security-context-test-9170" to be "Succeeded or Failed" Sep 29 11:35:44.208: INFO: Pod "busybox-user-65534-0f9af955-eed5-4570-b10d-907d18ee5e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.335901ms Sep 29 11:35:46.213: INFO: Pod "busybox-user-65534-0f9af955-eed5-4570-b10d-907d18ee5e4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055343637s Sep 29 11:35:48.218: INFO: Pod "busybox-user-65534-0f9af955-eed5-4570-b10d-907d18ee5e4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059734305s Sep 29 11:35:48.218: INFO: Pod "busybox-user-65534-0f9af955-eed5-4570-b10d-907d18ee5e4d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:35:48.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9170" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2937,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:35:48.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:35:48.304: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6838" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":180,"skipped":2946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:35:48.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Sep 29 11:35:48.380: INFO: Waiting up to 5m0s for pod "pod-73045d14-da69-498b-955d-3052f06432f9" in namespace "emptydir-7058" to be "Succeeded or Failed" Sep 29 11:35:48.395: INFO: Pod "pod-73045d14-da69-498b-955d-3052f06432f9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464803ms Sep 29 11:35:50.777: INFO: Pod "pod-73045d14-da69-498b-955d-3052f06432f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.397228807s Sep 29 11:35:52.782: INFO: Pod "pod-73045d14-da69-498b-955d-3052f06432f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.401901164s STEP: Saw pod success Sep 29 11:35:52.782: INFO: Pod "pod-73045d14-da69-498b-955d-3052f06432f9" satisfied condition "Succeeded or Failed" Sep 29 11:35:52.786: INFO: Trying to get logs from node kali-worker pod pod-73045d14-da69-498b-955d-3052f06432f9 container test-container: STEP: delete the pod Sep 29 11:35:52.867: INFO: Waiting for pod pod-73045d14-da69-498b-955d-3052f06432f9 to disappear Sep 29 11:35:52.879: INFO: Pod pod-73045d14-da69-498b-955d-3052f06432f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:35:52.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7058" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":2973,"failed":0} SS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:35:52.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Sep 29 11:35:52.956: INFO: Major version: 1 STEP: Confirm minor version Sep 29 11:35:52.956: INFO: cleanMinorVersion: 19 Sep 29 11:35:52.956: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:35:52.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7791" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":182,"skipped":2975,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:35:52.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-29 Sep 29 11:35:57.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Sep 29 11:35:57.343: INFO: stderr: "I0929 11:35:57.236295 1899 log.go:181] (0xc0009b5550) (0xc000a26aa0) Create stream\nI0929 11:35:57.236346 1899 log.go:181] (0xc0009b5550) (0xc000a26aa0) Stream added, broadcasting: 1\nI0929 11:35:57.240254 1899 log.go:181] (0xc0009b5550) Reply frame received for 1\nI0929 11:35:57.240293 1899 log.go:181] (0xc0009b5550) (0xc000211040) Create stream\nI0929 11:35:57.240303 1899 log.go:181] (0xc0009b5550) (0xc000211040) Stream added, broadcasting: 3\nI0929 11:35:57.241088 1899 log.go:181] (0xc0009b5550) Reply frame received for 3\nI0929 11:35:57.241129 1899 log.go:181] (0xc0009b5550) (0xc000a26000) Create stream\nI0929 11:35:57.241140 1899 log.go:181] (0xc0009b5550) (0xc000a26000) Stream added, broadcasting: 5\nI0929 11:35:57.241897 1899 log.go:181] (0xc0009b5550) Reply frame received for 5\nI0929 11:35:57.329731 1899 log.go:181] (0xc0009b5550) Data frame received for 5\nI0929 11:35:57.329758 1899 log.go:181] (0xc000a26000) (5) Data frame handling\nI0929 11:35:57.329774 1899 log.go:181] (0xc000a26000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0929 11:35:57.334175 1899 log.go:181] (0xc0009b5550) Data frame received for 3\nI0929 11:35:57.334258 1899 log.go:181] (0xc000211040) (3) Data frame handling\nI0929 11:35:57.334289 1899 log.go:181] (0xc000211040) (3) Data frame sent\nI0929 11:35:57.334688 1899 log.go:181] (0xc0009b5550) Data frame received for 5\nI0929 11:35:57.334726 1899 log.go:181] (0xc000a26000) (5) Data frame handling\nI0929 
11:35:57.334877 1899 log.go:181] (0xc0009b5550) Data frame received for 3\nI0929 11:35:57.334900 1899 log.go:181] (0xc000211040) (3) Data frame handling\nI0929 11:35:57.338402 1899 log.go:181] (0xc0009b5550) Data frame received for 1\nI0929 11:35:57.338425 1899 log.go:181] (0xc000a26aa0) (1) Data frame handling\nI0929 11:35:57.338436 1899 log.go:181] (0xc000a26aa0) (1) Data frame sent\nI0929 11:35:57.338450 1899 log.go:181] (0xc0009b5550) (0xc000a26aa0) Stream removed, broadcasting: 1\nI0929 11:35:57.338501 1899 log.go:181] (0xc0009b5550) Go away received\nI0929 11:35:57.338822 1899 log.go:181] (0xc0009b5550) (0xc000a26aa0) Stream removed, broadcasting: 1\nI0929 11:35:57.338840 1899 log.go:181] (0xc0009b5550) (0xc000211040) Stream removed, broadcasting: 3\nI0929 11:35:57.338848 1899 log.go:181] (0xc0009b5550) (0xc000a26000) Stream removed, broadcasting: 5\n" Sep 29 11:35:57.343: INFO: stdout: "iptables" Sep 29 11:35:57.343: INFO: proxyMode: iptables Sep 29 11:35:57.348: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:35:57.373: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:35:59.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:35:59.379: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:36:01.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:36:01.378: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:36:03.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:36:03.379: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:36:05.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:36:05.379: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:36:07.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:36:07.379: INFO: Pod kube-proxy-mode-detector still exists Sep 29 11:36:09.374: INFO: Waiting for pod kube-proxy-mode-detector to disappear Sep 29 11:36:09.378: INFO: Pod 
kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-29 STEP: creating replication controller affinity-nodeport-timeout in namespace services-29 I0929 11:36:09.449682 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-29, replica count: 3 I0929 11:36:12.500139 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:36:15.500421 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:36:15.512: INFO: Creating new exec pod Sep 29 11:36:20.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Sep 29 11:36:23.593: INFO: stderr: "I0929 11:36:23.491553 1917 log.go:181] (0xc0002340b0) (0xc000ba4000) Create stream\nI0929 11:36:23.491613 1917 log.go:181] (0xc0002340b0) (0xc000ba4000) Stream added, broadcasting: 1\nI0929 11:36:23.493714 1917 log.go:181] (0xc0002340b0) Reply frame received for 1\nI0929 11:36:23.493795 1917 log.go:181] (0xc0002340b0) (0xc000ba40a0) Create stream\nI0929 11:36:23.493837 1917 log.go:181] (0xc0002340b0) (0xc000ba40a0) Stream added, broadcasting: 3\nI0929 11:36:23.494898 1917 log.go:181] (0xc0002340b0) Reply frame received for 3\nI0929 11:36:23.494936 1917 log.go:181] (0xc0002340b0) (0xc000ba4140) Create stream\nI0929 11:36:23.494948 1917 log.go:181] (0xc0002340b0) (0xc000ba4140) Stream added, broadcasting: 5\nI0929 11:36:23.495714 1917 log.go:181] (0xc0002340b0) Reply frame received for 5\nI0929 11:36:23.585263 1917 log.go:181] (0xc0002340b0) Data frame received for 3\nI0929 11:36:23.585314 1917 log.go:181] (0xc000ba40a0) (3) Data frame 
handling\nI0929 11:36:23.585352 1917 log.go:181] (0xc0002340b0) Data frame received for 5\nI0929 11:36:23.585370 1917 log.go:181] (0xc000ba4140) (5) Data frame handling\nI0929 11:36:23.585397 1917 log.go:181] (0xc000ba4140) (5) Data frame sent\nI0929 11:36:23.585432 1917 log.go:181] (0xc0002340b0) Data frame received for 5\nI0929 11:36:23.585468 1917 log.go:181] (0xc000ba4140) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0929 11:36:23.587766 1917 log.go:181] (0xc0002340b0) Data frame received for 1\nI0929 11:36:23.587792 1917 log.go:181] (0xc000ba4000) (1) Data frame handling\nI0929 11:36:23.587819 1917 log.go:181] (0xc000ba4000) (1) Data frame sent\nI0929 11:36:23.587845 1917 log.go:181] (0xc0002340b0) (0xc000ba4000) Stream removed, broadcasting: 1\nI0929 11:36:23.587869 1917 log.go:181] (0xc0002340b0) Go away received\nI0929 11:36:23.588318 1917 log.go:181] (0xc0002340b0) (0xc000ba4000) Stream removed, broadcasting: 1\nI0929 11:36:23.588344 1917 log.go:181] (0xc0002340b0) (0xc000ba40a0) Stream removed, broadcasting: 3\nI0929 11:36:23.588355 1917 log.go:181] (0xc0002340b0) (0xc000ba4140) Stream removed, broadcasting: 5\n" Sep 29 11:36:23.593: INFO: stdout: "" Sep 29 11:36:23.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c nc -zv -t -w 2 10.102.79.25 80' Sep 29 11:36:23.804: INFO: stderr: "I0929 11:36:23.732330 1935 log.go:181] (0xc0008b1550) (0xc0008a8960) Create stream\nI0929 11:36:23.732415 1935 log.go:181] (0xc0008b1550) (0xc0008a8960) Stream added, broadcasting: 1\nI0929 11:36:23.737290 1935 log.go:181] (0xc0008b1550) Reply frame received for 1\nI0929 11:36:23.737325 1935 log.go:181] (0xc0008b1550) (0xc000630000) Create stream\nI0929 11:36:23.737336 1935 log.go:181] (0xc0008b1550) (0xc000630000) Stream added, broadcasting: 3\nI0929 
11:36:23.738404 1935 log.go:181] (0xc0008b1550) Reply frame received for 3\nI0929 11:36:23.738442 1935 log.go:181] (0xc0008b1550) (0xc0008a8000) Create stream\nI0929 11:36:23.738452 1935 log.go:181] (0xc0008b1550) (0xc0008a8000) Stream added, broadcasting: 5\nI0929 11:36:23.739308 1935 log.go:181] (0xc0008b1550) Reply frame received for 5\nI0929 11:36:23.797890 1935 log.go:181] (0xc0008b1550) Data frame received for 3\nI0929 11:36:23.797932 1935 log.go:181] (0xc000630000) (3) Data frame handling\nI0929 11:36:23.797956 1935 log.go:181] (0xc0008b1550) Data frame received for 5\nI0929 11:36:23.797966 1935 log.go:181] (0xc0008a8000) (5) Data frame handling\nI0929 11:36:23.797977 1935 log.go:181] (0xc0008a8000) (5) Data frame sent\nI0929 11:36:23.797989 1935 log.go:181] (0xc0008b1550) Data frame received for 5\nI0929 11:36:23.797997 1935 log.go:181] (0xc0008a8000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.79.25 80\nConnection to 10.102.79.25 80 port [tcp/http] succeeded!\nI0929 11:36:23.799074 1935 log.go:181] (0xc0008b1550) Data frame received for 1\nI0929 11:36:23.799095 1935 log.go:181] (0xc0008a8960) (1) Data frame handling\nI0929 11:36:23.799111 1935 log.go:181] (0xc0008a8960) (1) Data frame sent\nI0929 11:36:23.799130 1935 log.go:181] (0xc0008b1550) (0xc0008a8960) Stream removed, broadcasting: 1\nI0929 11:36:23.799195 1935 log.go:181] (0xc0008b1550) Go away received\nI0929 11:36:23.799580 1935 log.go:181] (0xc0008b1550) (0xc0008a8960) Stream removed, broadcasting: 1\nI0929 11:36:23.799606 1935 log.go:181] (0xc0008b1550) (0xc000630000) Stream removed, broadcasting: 3\nI0929 11:36:23.799619 1935 log.go:181] (0xc0008b1550) (0xc0008a8000) Stream removed, broadcasting: 5\n" Sep 29 11:36:23.804: INFO: stdout: "" Sep 29 11:36:23.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30476' Sep 29 11:36:24.034: INFO: 
stderr: "I0929 11:36:23.950427 1953 log.go:181] (0xc000dc2000) (0xc00038c1e0) Create stream\nI0929 11:36:23.950533 1953 log.go:181] (0xc000dc2000) (0xc00038c1e0) Stream added, broadcasting: 1\nI0929 11:36:23.953649 1953 log.go:181] (0xc000dc2000) Reply frame received for 1\nI0929 11:36:23.953721 1953 log.go:181] (0xc000dc2000) (0xc00019da40) Create stream\nI0929 11:36:23.953741 1953 log.go:181] (0xc000dc2000) (0xc00019da40) Stream added, broadcasting: 3\nI0929 11:36:23.954789 1953 log.go:181] (0xc000dc2000) Reply frame received for 3\nI0929 11:36:23.954827 1953 log.go:181] (0xc000dc2000) (0xc0003763c0) Create stream\nI0929 11:36:23.954837 1953 log.go:181] (0xc000dc2000) (0xc0003763c0) Stream added, broadcasting: 5\nI0929 11:36:23.955820 1953 log.go:181] (0xc000dc2000) Reply frame received for 5\nI0929 11:36:24.026636 1953 log.go:181] (0xc000dc2000) Data frame received for 3\nI0929 11:36:24.026682 1953 log.go:181] (0xc00019da40) (3) Data frame handling\nI0929 11:36:24.026710 1953 log.go:181] (0xc000dc2000) Data frame received for 5\nI0929 11:36:24.026721 1953 log.go:181] (0xc0003763c0) (5) Data frame handling\nI0929 11:36:24.026740 1953 log.go:181] (0xc0003763c0) (5) Data frame sent\nI0929 11:36:24.026751 1953 log.go:181] (0xc000dc2000) Data frame received for 5\nI0929 11:36:24.026762 1953 log.go:181] (0xc0003763c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30476\nConnection to 172.18.0.12 30476 port [tcp/30476] succeeded!\nI0929 11:36:24.028438 1953 log.go:181] (0xc000dc2000) Data frame received for 1\nI0929 11:36:24.028459 1953 log.go:181] (0xc00038c1e0) (1) Data frame handling\nI0929 11:36:24.028473 1953 log.go:181] (0xc00038c1e0) (1) Data frame sent\nI0929 11:36:24.028614 1953 log.go:181] (0xc000dc2000) (0xc00038c1e0) Stream removed, broadcasting: 1\nI0929 11:36:24.028657 1953 log.go:181] (0xc000dc2000) Go away received\nI0929 11:36:24.029274 1953 log.go:181] (0xc000dc2000) (0xc00038c1e0) Stream removed, broadcasting: 1\nI0929 11:36:24.029302 1953 
log.go:181] (0xc000dc2000) (0xc00019da40) Stream removed, broadcasting: 3\nI0929 11:36:24.029315 1953 log.go:181] (0xc000dc2000) (0xc0003763c0) Stream removed, broadcasting: 5\n" Sep 29 11:36:24.034: INFO: stdout: "" Sep 29 11:36:24.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30476' Sep 29 11:36:24.347: INFO: stderr: "I0929 11:36:24.266118 1971 log.go:181] (0xc000c3ef20) (0xc000c3a500) Create stream\nI0929 11:36:24.266201 1971 log.go:181] (0xc000c3ef20) (0xc000c3a500) Stream added, broadcasting: 1\nI0929 11:36:24.271111 1971 log.go:181] (0xc000c3ef20) Reply frame received for 1\nI0929 11:36:24.271162 1971 log.go:181] (0xc000c3ef20) (0xc000540000) Create stream\nI0929 11:36:24.271188 1971 log.go:181] (0xc000c3ef20) (0xc000540000) Stream added, broadcasting: 3\nI0929 11:36:24.272399 1971 log.go:181] (0xc000c3ef20) Reply frame received for 3\nI0929 11:36:24.272468 1971 log.go:181] (0xc000c3ef20) (0xc0005403c0) Create stream\nI0929 11:36:24.272500 1971 log.go:181] (0xc000c3ef20) (0xc0005403c0) Stream added, broadcasting: 5\nI0929 11:36:24.273873 1971 log.go:181] (0xc000c3ef20) Reply frame received for 5\nI0929 11:36:24.340933 1971 log.go:181] (0xc000c3ef20) Data frame received for 5\nI0929 11:36:24.340975 1971 log.go:181] (0xc0005403c0) (5) Data frame handling\nI0929 11:36:24.340993 1971 log.go:181] (0xc0005403c0) (5) Data frame sent\nI0929 11:36:24.341005 1971 log.go:181] (0xc000c3ef20) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.13 30476\nConnection to 172.18.0.13 30476 port [tcp/30476] succeeded!\nI0929 11:36:24.341013 1971 log.go:181] (0xc0005403c0) (5) Data frame handling\nI0929 11:36:24.341055 1971 log.go:181] (0xc000c3ef20) Data frame received for 3\nI0929 11:36:24.341071 1971 log.go:181] (0xc000540000) (3) Data frame handling\nI0929 11:36:24.342368 1971 log.go:181] (0xc000c3ef20) Data frame 
received for 1\nI0929 11:36:24.342405 1971 log.go:181] (0xc000c3a500) (1) Data frame handling\nI0929 11:36:24.342425 1971 log.go:181] (0xc000c3a500) (1) Data frame sent\nI0929 11:36:24.342439 1971 log.go:181] (0xc000c3ef20) (0xc000c3a500) Stream removed, broadcasting: 1\nI0929 11:36:24.342464 1971 log.go:181] (0xc000c3ef20) Go away received\nI0929 11:36:24.342842 1971 log.go:181] (0xc000c3ef20) (0xc000c3a500) Stream removed, broadcasting: 1\nI0929 11:36:24.342862 1971 log.go:181] (0xc000c3ef20) (0xc000540000) Stream removed, broadcasting: 3\nI0929 11:36:24.342874 1971 log.go:181] (0xc000c3ef20) (0xc0005403c0) Stream removed, broadcasting: 5\n" Sep 29 11:36:24.347: INFO: stdout: "" Sep 29 11:36:24.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:30476/ ; done' Sep 29 11:36:24.710: INFO: stderr: "I0929 11:36:24.542610 1989 log.go:181] (0xc0000980b0) (0xc0009a8140) Create stream\nI0929 11:36:24.542666 1989 log.go:181] (0xc0000980b0) (0xc0009a8140) Stream added, broadcasting: 1\nI0929 11:36:24.544523 1989 log.go:181] (0xc0000980b0) Reply frame received for 1\nI0929 11:36:24.544566 1989 log.go:181] (0xc0000980b0) (0xc000a2c000) Create stream\nI0929 11:36:24.544577 1989 log.go:181] (0xc0000980b0) (0xc000a2c000) Stream added, broadcasting: 3\nI0929 11:36:24.545819 1989 log.go:181] (0xc0000980b0) Reply frame received for 3\nI0929 11:36:24.545862 1989 log.go:181] (0xc0000980b0) (0xc000a2c0a0) Create stream\nI0929 11:36:24.545879 1989 log.go:181] (0xc0000980b0) (0xc000a2c0a0) Stream added, broadcasting: 5\nI0929 11:36:24.546998 1989 log.go:181] (0xc0000980b0) Reply frame received for 5\nI0929 11:36:24.618410 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.618442 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.618486 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.618530 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.618549 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.618597 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\nI0929 11:36:24.621551 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.621570 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.621584 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.622206 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.622233 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.622240 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.622250 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.622255 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.622260 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.628692 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.628718 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.628737 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.629099 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.629135 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.629148 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\nI0929 11:36:24.629162 1989 log.go:181] (0xc0000980b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0929 11:36:24.629172 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.629206 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n http://172.18.0.12:30476/\nI0929 11:36:24.629224 1989 log.go:181] (0xc0000980b0) Data frame received for 
3\nI0929 11:36:24.629243 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.629260 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.634851 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.634866 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.634879 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.635464 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.635486 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.635509 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.635519 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.635537 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.635553 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.638920 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.638934 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.638946 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.639547 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.639578 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.639590 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.639630 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.639664 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.639682 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.645893 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.645910 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.645923 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.646476 1989 log.go:181] (0xc0000980b0) Data 
frame received for 5\nI0929 11:36:24.646495 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.646528 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.646716 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.646734 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.646747 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.652023 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.652038 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.652055 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.652725 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.652741 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.652759 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.652773 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.652783 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.652790 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.656580 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.656605 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.656636 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.657053 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.657067 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.657076 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.657086 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.657118 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.657142 1989 log.go:181] 
(0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.661795 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.661817 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.661834 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.662476 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.662495 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.662511 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.662528 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.662552 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.662572 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.667092 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.667125 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.667144 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.667699 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.667730 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.667740 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.667755 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.667763 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.667772 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.672658 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.672675 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.672694 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.673253 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.673272 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.673279 
1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.673301 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.673325 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.673345 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.677342 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.677358 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.677372 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.677996 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.678014 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.678030 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.678046 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.678059 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.678071 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\nI0929 11:36:24.678087 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.678097 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.678129 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\nI0929 11:36:24.684494 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.684513 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.684527 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.685117 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.685158 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.685177 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.685200 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.685215 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 
11:36:24.685235 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.690023 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.690037 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.690045 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.690473 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.690489 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.690498 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.690512 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.690530 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.690550 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.695001 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.695012 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.695018 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.695570 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.695582 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.695599 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.695628 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.695649 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.695669 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.699600 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.699625 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.699634 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.700174 1989 log.go:181] (0xc0000980b0) Data frame received for 
3\nI0929 11:36:24.700188 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.700203 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.700230 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.700241 1989 log.go:181] (0xc000a2c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.700258 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.703390 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.703410 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.703423 1989 log.go:181] (0xc000a2c000) (3) Data frame sent\nI0929 11:36:24.704198 1989 log.go:181] (0xc0000980b0) Data frame received for 3\nI0929 11:36:24.704224 1989 log.go:181] (0xc000a2c000) (3) Data frame handling\nI0929 11:36:24.704377 1989 log.go:181] (0xc0000980b0) Data frame received for 5\nI0929 11:36:24.704394 1989 log.go:181] (0xc000a2c0a0) (5) Data frame handling\nI0929 11:36:24.706209 1989 log.go:181] (0xc0000980b0) Data frame received for 1\nI0929 11:36:24.706231 1989 log.go:181] (0xc0009a8140) (1) Data frame handling\nI0929 11:36:24.706255 1989 log.go:181] (0xc0009a8140) (1) Data frame sent\nI0929 11:36:24.706273 1989 log.go:181] (0xc0000980b0) (0xc0009a8140) Stream removed, broadcasting: 1\nI0929 11:36:24.706301 1989 log.go:181] (0xc0000980b0) Go away received\nI0929 11:36:24.706605 1989 log.go:181] (0xc0000980b0) (0xc0009a8140) Stream removed, broadcasting: 1\nI0929 11:36:24.706619 1989 log.go:181] (0xc0000980b0) (0xc000a2c000) Stream removed, broadcasting: 3\nI0929 11:36:24.706625 1989 log.go:181] (0xc0000980b0) (0xc000a2c0a0) Stream removed, broadcasting: 5\n" Sep 29 11:36:24.711: INFO: stdout: 
"\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h\naffinity-nodeport-timeout-crc8h" Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Received response from host: affinity-nodeport-timeout-crc8h Sep 29 11:36:24.711: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:30476/' Sep 29 11:36:24.922: INFO: stderr: "I0929 11:36:24.845217 2007 log.go:181] (0xc0008c3340) (0xc000e80b40) Create stream\nI0929 11:36:24.845299 2007 log.go:181] (0xc0008c3340) (0xc000e80b40) Stream added, broadcasting: 1\nI0929 11:36:24.855593 2007 log.go:181] (0xc0008c3340) Reply frame received for 1\nI0929 11:36:24.855644 2007 log.go:181] (0xc0008c3340) (0xc000ba00a0) Create stream\nI0929 11:36:24.855657 2007 log.go:181] (0xc0008c3340) (0xc000ba00a0) Stream added, broadcasting: 3\nI0929 11:36:24.856627 2007 log.go:181] (0xc0008c3340) Reply frame received for 3\nI0929 11:36:24.856658 2007 log.go:181] (0xc0008c3340) (0xc000ba0140) Create stream\nI0929 11:36:24.856671 2007 log.go:181] (0xc0008c3340) (0xc000ba0140) Stream added, broadcasting: 5\nI0929 11:36:24.857604 2007 log.go:181] (0xc0008c3340) Reply frame received for 5\nI0929 11:36:24.907954 2007 log.go:181] (0xc0008c3340) Data frame received for 5\nI0929 11:36:24.908003 2007 log.go:181] (0xc000ba0140) (5) Data frame handling\nI0929 11:36:24.908036 2007 log.go:181] (0xc000ba0140) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:24.913386 2007 log.go:181] (0xc0008c3340) Data frame received for 3\nI0929 11:36:24.913410 2007 log.go:181] (0xc000ba00a0) (3) Data frame handling\nI0929 11:36:24.913447 2007 log.go:181] (0xc000ba00a0) (3) Data frame sent\nI0929 11:36:24.914443 2007 log.go:181] (0xc0008c3340) Data frame received for 3\nI0929 11:36:24.914473 2007 log.go:181] (0xc000ba00a0) (3) Data frame handling\nI0929 11:36:24.914489 2007 log.go:181] (0xc0008c3340) Data frame received for 5\nI0929 11:36:24.914509 2007 log.go:181] (0xc000ba0140) (5) Data frame handling\nI0929 11:36:24.915993 2007 log.go:181] (0xc0008c3340) Data frame received for 1\nI0929 11:36:24.916100 2007 log.go:181] 
(0xc000e80b40) (1) Data frame handling\nI0929 11:36:24.916131 2007 log.go:181] (0xc000e80b40) (1) Data frame sent\nI0929 11:36:24.916156 2007 log.go:181] (0xc0008c3340) (0xc000e80b40) Stream removed, broadcasting: 1\nI0929 11:36:24.916181 2007 log.go:181] (0xc0008c3340) Go away received\nI0929 11:36:24.916699 2007 log.go:181] (0xc0008c3340) (0xc000e80b40) Stream removed, broadcasting: 1\nI0929 11:36:24.916746 2007 log.go:181] (0xc0008c3340) (0xc000ba00a0) Stream removed, broadcasting: 3\nI0929 11:36:24.916772 2007 log.go:181] (0xc0008c3340) (0xc000ba0140) Stream removed, broadcasting: 5\n" Sep 29 11:36:24.922: INFO: stdout: "affinity-nodeport-timeout-crc8h" Sep 29 11:36:39.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-29 execpod-affinityt4n2q -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:30476/' Sep 29 11:36:40.170: INFO: stderr: "I0929 11:36:40.058949 2025 log.go:181] (0xc00095d290) (0xc0009a8820) Create stream\nI0929 11:36:40.059000 2025 log.go:181] (0xc00095d290) (0xc0009a8820) Stream added, broadcasting: 1\nI0929 11:36:40.065844 2025 log.go:181] (0xc00095d290) Reply frame received for 1\nI0929 11:36:40.065899 2025 log.go:181] (0xc00095d290) (0xc0009a8000) Create stream\nI0929 11:36:40.065911 2025 log.go:181] (0xc00095d290) (0xc0009a8000) Stream added, broadcasting: 3\nI0929 11:36:40.066865 2025 log.go:181] (0xc00095d290) Reply frame received for 3\nI0929 11:36:40.066938 2025 log.go:181] (0xc00095d290) (0xc0009a80a0) Create stream\nI0929 11:36:40.066962 2025 log.go:181] (0xc00095d290) (0xc0009a80a0) Stream added, broadcasting: 5\nI0929 11:36:40.067955 2025 log.go:181] (0xc00095d290) Reply frame received for 5\nI0929 11:36:40.155435 2025 log.go:181] (0xc00095d290) Data frame received for 5\nI0929 11:36:40.155466 2025 log.go:181] (0xc0009a80a0) (5) Data frame handling\nI0929 11:36:40.155488 2025 log.go:181] (0xc0009a80a0) (5) Data frame sent\n+ curl -q 
-s --connect-timeout 2 http://172.18.0.12:30476/\nI0929 11:36:40.161334 2025 log.go:181] (0xc00095d290) Data frame received for 3\nI0929 11:36:40.161350 2025 log.go:181] (0xc0009a8000) (3) Data frame handling\nI0929 11:36:40.161358 2025 log.go:181] (0xc0009a8000) (3) Data frame sent\nI0929 11:36:40.162090 2025 log.go:181] (0xc00095d290) Data frame received for 3\nI0929 11:36:40.162114 2025 log.go:181] (0xc0009a8000) (3) Data frame handling\nI0929 11:36:40.162246 2025 log.go:181] (0xc00095d290) Data frame received for 5\nI0929 11:36:40.162259 2025 log.go:181] (0xc0009a80a0) (5) Data frame handling\nI0929 11:36:40.164302 2025 log.go:181] (0xc00095d290) Data frame received for 1\nI0929 11:36:40.164322 2025 log.go:181] (0xc0009a8820) (1) Data frame handling\nI0929 11:36:40.164342 2025 log.go:181] (0xc0009a8820) (1) Data frame sent\nI0929 11:36:40.164357 2025 log.go:181] (0xc00095d290) (0xc0009a8820) Stream removed, broadcasting: 1\nI0929 11:36:40.164381 2025 log.go:181] (0xc00095d290) Go away received\nI0929 11:36:40.165047 2025 log.go:181] (0xc00095d290) (0xc0009a8820) Stream removed, broadcasting: 1\nI0929 11:36:40.165080 2025 log.go:181] (0xc00095d290) (0xc0009a8000) Stream removed, broadcasting: 3\nI0929 11:36:40.165095 2025 log.go:181] (0xc00095d290) (0xc0009a80a0) Stream removed, broadcasting: 5\n" Sep 29 11:36:40.171: INFO: stdout: "affinity-nodeport-timeout-nn76f" Sep 29 11:36:40.171: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-29, will wait for the garbage collector to delete the pods Sep 29 11:36:40.476: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 207.340766ms Sep 29 11:36:41.076: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.21566ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 
11:36:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-29" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:55.777 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":183,"skipped":2977,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:36:48.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Sep 29 11:36:48.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config api-versions'
Sep 29 11:36:49.051: INFO: stderr: ""
Sep 29 11:36:49.051: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:36:49.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5679" for this suite.
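For context on what the api-versions spec above is asserting: the test only needs the core group `v1` to appear as its own line in the `kubectl api-versions` stdout. A minimal offline sketch of that check, using an abbreviated copy of the stdout from the log (the kubectl command itself is not re-run here):

```python
# Abbreviated stdout from the log above, one group/version per line.
api_versions_stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "v1\n"
)

# The check must match "v1" as a whole line; a plain substring test
# would also match "apps/v1" and give a false positive.
lines = api_versions_stdout.splitlines()
assert "v1" in lines
```

This mirrors the intent of the conformance check, not its exact implementation in the e2e framework.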
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":184,"skipped":2979,"failed":0}
SSSSSSS
------------------------------
[sig-node] PodTemplates should delete a collection of pod templates [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:36:49.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of pod templates
Sep 29 11:36:49.131: INFO: created test-podtemplate-1
Sep 29 11:36:49.137: INFO: created test-podtemplate-2
Sep 29 11:36:49.140: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Sep 29 11:36:49.156: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Sep 29 11:36:49.181: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:36:49.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9376" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":185,"skipped":2986,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:36:49.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Sep 29 11:36:49.316: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612303 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:36:49.316: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612304 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:36:49.316: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612305 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Sep 29 11:36:59.422: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612378 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:36:59.422: INFO: 
Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612379 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:36:59.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4740 /api/v1/namespaces/watch-4740/configmaps/e2e-watch-test-label-changed 8a186746-c186-4718-a3b5-5e9a7e6bdf96 1612380 0 2020-09-29 11:36:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-09-29 11:36:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:36:59.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4740" for this suite. 
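The watch spec above expects a specific event sequence (ADDED, then MODIFIED, then DELETED) once the configmap's label is restored, with every event carrying the watched label. A minimal offline sketch of that ordering check, with the event stream faked from the log above (no API server is contacted; the tuple representation is an illustration, not the framework's actual type):

```python
# Events observed by the watch, in arrival order, as (type, labels) pairs,
# matching the ADDED/MODIFIED/DELETED entries logged above.
events = [
    ("ADDED",    {"watch-this-configmap": "label-changed-and-restored"}),
    ("MODIFIED", {"watch-this-configmap": "label-changed-and-restored"}),
    ("DELETED",  {"watch-this-configmap": "label-changed-and-restored"}),
]

# The spec passes when the event types arrive in the expected order and
# every event's object still carries the watched label.
assert [etype for etype, _ in events] == ["ADDED", "MODIFIED", "DELETED"]
assert all("watch-this-configmap" in labels for _, labels in events)
```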
• [SLOW TEST:10.244 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":186,"skipped":2994,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:36:59.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 29 11:37:00.064: INFO: deployment "sample-webhook-deployment" doesn't have the required revision
set Sep 29 11:37:02.201: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:37:04.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976220, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: 
Verifying the service has paired with the endpoint
Sep 29 11:37:07.240: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:37:07.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2324" for this suite.
STEP: Destroying namespace "webhook-2324-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.900 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":187,"skipped":3020,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:37:07.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 29 11:37:07.400: INFO: Waiting
up to 5m0s for pod "pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935" in namespace "emptydir-7386" to be "Succeeded or Failed"
Sep 29 11:37:07.405: INFO: Pod "pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432007ms
Sep 29 11:37:09.409: INFO: Pod "pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008611086s
Sep 29 11:37:11.413: INFO: Pod "pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012739744s
STEP: Saw pod success
Sep 29 11:37:11.413: INFO: Pod "pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935" satisfied condition "Succeeded or Failed"
Sep 29 11:37:11.416: INFO: Trying to get logs from node kali-worker2 pod pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935 container test-container:
STEP: delete the pod
Sep 29 11:37:11.465: INFO: Waiting for pod pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935 to disappear
Sep 29 11:37:11.490: INFO: Pod pod-3a9dbf8e-eea2-4b5a-bd43-a9d9bab9f935 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:37:11.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7386" for this suite.
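The pod wait logged above polls the pod's phase until it reaches a terminal state ("Succeeded or Failed") or the 5m0s timeout expires. A minimal offline sketch of that polling condition, with the phase sequence faked from the log (the real framework queries the API server between polls; `wait_for_terminal` is a hypothetical helper, not the framework's API):

```python
def wait_for_terminal(phases):
    # Mirrors the "Succeeded or Failed" condition in the log: each element
    # stands for one poll of the pod's status.phase; stop at the first
    # terminal phase, or fail if the sequence ends without one (timeout).
    for phase in phases:
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Phase sequence observed in the log: Pending, Pending, then Succeeded.
result = wait_for_terminal(["Pending", "Pending", "Succeeded"])
```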
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":3046,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:37:11.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Sep 29 11:37:11.556: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix052518376/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:37:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2515" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":189,"skipped":3059,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:37:11.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 11:37:12.201: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 11:37:14.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976232, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976232, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976232, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736976232, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:37:17.264: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:37:17.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-883" for this suite. STEP: Destroying namespace "webhook-883-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.761 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":190,"skipped":3059,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:37:17.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-8214
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 29
11:37:17.540: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 29 11:37:17.803: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:37:19.808: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:37:21.857: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:23.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:25.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:27.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:29.808: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:31.815: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:33.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:37:35.808: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 29 11:37:35.832: INFO: The status of Pod netserver-1 is Running (Ready = false) Sep 29 11:37:37.865: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 29 11:37:41.894: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.117:8080/dial?request=hostname&protocol=udp&host=10.244.2.124&port=8081&tries=1'] Namespace:pod-network-test-8214 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:41.894: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:41.932904 7 log.go:181] (0xc00001cb00) (0xc00333f9a0) Create stream I0929 11:37:41.932945 7 log.go:181] (0xc00001cb00) (0xc00333f9a0) Stream added, broadcasting: 1 I0929 11:37:41.935892 7 log.go:181] (0xc00001cb00) Reply frame received for 1 I0929 11:37:41.935939 7 log.go:181] (0xc00001cb00) (0xc001362000) Create stream I0929 11:37:41.935956 7 log.go:181] (0xc00001cb00) (0xc001362000) Stream added, broadcasting: 
3 I0929 11:37:41.936818 7 log.go:181] (0xc00001cb00) Reply frame received for 3 I0929 11:37:41.936940 7 log.go:181] (0xc00001cb00) (0xc0068481e0) Create stream I0929 11:37:41.936961 7 log.go:181] (0xc00001cb00) (0xc0068481e0) Stream added, broadcasting: 5 I0929 11:37:41.938018 7 log.go:181] (0xc00001cb00) Reply frame received for 5 I0929 11:37:42.021051 7 log.go:181] (0xc00001cb00) Data frame received for 3 I0929 11:37:42.021088 7 log.go:181] (0xc001362000) (3) Data frame handling I0929 11:37:42.021103 7 log.go:181] (0xc001362000) (3) Data frame sent I0929 11:37:42.021774 7 log.go:181] (0xc00001cb00) Data frame received for 5 I0929 11:37:42.021825 7 log.go:181] (0xc0068481e0) (5) Data frame handling I0929 11:37:42.021851 7 log.go:181] (0xc00001cb00) Data frame received for 3 I0929 11:37:42.021872 7 log.go:181] (0xc001362000) (3) Data frame handling I0929 11:37:42.023594 7 log.go:181] (0xc00001cb00) Data frame received for 1 I0929 11:37:42.023612 7 log.go:181] (0xc00333f9a0) (1) Data frame handling I0929 11:37:42.023624 7 log.go:181] (0xc00333f9a0) (1) Data frame sent I0929 11:37:42.023642 7 log.go:181] (0xc00001cb00) (0xc00333f9a0) Stream removed, broadcasting: 1 I0929 11:37:42.023663 7 log.go:181] (0xc00001cb00) Go away received I0929 11:37:42.023745 7 log.go:181] (0xc00001cb00) (0xc00333f9a0) Stream removed, broadcasting: 1 I0929 11:37:42.023769 7 log.go:181] (0xc00001cb00) (0xc001362000) Stream removed, broadcasting: 3 I0929 11:37:42.023780 7 log.go:181] (0xc00001cb00) (0xc0068481e0) Stream removed, broadcasting: 5 Sep 29 11:37:42.023: INFO: Waiting for responses: map[] Sep 29 11:37:42.026: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.117:8080/dial?request=hostname&protocol=udp&host=10.244.1.116&port=8081&tries=1'] Namespace:pod-network-test-8214 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:42.026: INFO: >>> kubeConfig: /root/.kube/config I0929 
11:37:42.061846 7 log.go:181] (0xc000967550) (0xc001362640) Create stream I0929 11:37:42.061872 7 log.go:181] (0xc000967550) (0xc001362640) Stream added, broadcasting: 1 I0929 11:37:42.064157 7 log.go:181] (0xc000967550) Reply frame received for 1 I0929 11:37:42.064204 7 log.go:181] (0xc000967550) (0xc00333fa40) Create stream I0929 11:37:42.064229 7 log.go:181] (0xc000967550) (0xc00333fa40) Stream added, broadcasting: 3 I0929 11:37:42.065232 7 log.go:181] (0xc000967550) Reply frame received for 3 I0929 11:37:42.065285 7 log.go:181] (0xc000967550) (0xc00333fae0) Create stream I0929 11:37:42.065302 7 log.go:181] (0xc000967550) (0xc00333fae0) Stream added, broadcasting: 5 I0929 11:37:42.066149 7 log.go:181] (0xc000967550) Reply frame received for 5 I0929 11:37:42.130346 7 log.go:181] (0xc000967550) Data frame received for 3 I0929 11:37:42.130381 7 log.go:181] (0xc00333fa40) (3) Data frame handling I0929 11:37:42.130417 7 log.go:181] (0xc00333fa40) (3) Data frame sent I0929 11:37:42.131208 7 log.go:181] (0xc000967550) Data frame received for 3 I0929 11:37:42.131247 7 log.go:181] (0xc00333fa40) (3) Data frame handling I0929 11:37:42.131284 7 log.go:181] (0xc000967550) Data frame received for 5 I0929 11:37:42.131312 7 log.go:181] (0xc00333fae0) (5) Data frame handling I0929 11:37:42.132964 7 log.go:181] (0xc000967550) Data frame received for 1 I0929 11:37:42.132991 7 log.go:181] (0xc001362640) (1) Data frame handling I0929 11:37:42.133023 7 log.go:181] (0xc001362640) (1) Data frame sent I0929 11:37:42.133053 7 log.go:181] (0xc000967550) (0xc001362640) Stream removed, broadcasting: 1 I0929 11:37:42.133100 7 log.go:181] (0xc000967550) Go away received I0929 11:37:42.133181 7 log.go:181] (0xc000967550) (0xc001362640) Stream removed, broadcasting: 1 I0929 11:37:42.133203 7 log.go:181] (0xc000967550) (0xc00333fa40) Stream removed, broadcasting: 3 I0929 11:37:42.133217 7 log.go:181] (0xc000967550) (0xc00333fae0) Stream removed, broadcasting: 5 Sep 29 11:37:42.133: INFO: 
Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 11:37:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8214" for this suite.
• [SLOW TEST:24.745 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3070,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:37:42.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file
[LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Sep 29 11:37:52.429: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:52.429: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:52.468767 7 log.go:181] (0xc003ab0b00) (0xc00226cf00) Create stream I0929 11:37:52.468797 7 log.go:181] (0xc003ab0b00) (0xc00226cf00) Stream added, broadcasting: 1 I0929 11:37:52.471639 7 log.go:181] (0xc003ab0b00) Reply frame received for 1 I0929 11:37:52.471745 7 log.go:181] (0xc003ab0b00) (0xc00333fb80) Create stream I0929 11:37:52.471774 7 log.go:181] (0xc003ab0b00) (0xc00333fb80) Stream added, broadcasting: 3 I0929 11:37:52.473134 7 log.go:181] (0xc003ab0b00) Reply frame received for 3 I0929 11:37:52.473178 7 log.go:181] (0xc003ab0b00) (0xc000f42780) Create stream I0929 11:37:52.473195 7 log.go:181] (0xc003ab0b00) (0xc000f42780) Stream added, broadcasting: 5 I0929 11:37:52.474222 7 log.go:181] (0xc003ab0b00) Reply frame received for 5 I0929 11:37:52.559346 7 log.go:181] (0xc003ab0b00) Data frame received for 5 I0929 11:37:52.559379 7 log.go:181] (0xc003ab0b00) Data frame received for 3 I0929 11:37:52.559407 7 log.go:181] (0xc00333fb80) (3) Data frame handling I0929 11:37:52.559432 7 log.go:181] (0xc00333fb80) (3) Data frame sent I0929 11:37:52.559451 7 log.go:181] (0xc003ab0b00) Data frame received for 3 I0929 11:37:52.559466 7 log.go:181] (0xc00333fb80) (3) Data frame handling I0929 11:37:52.559505 7 log.go:181] (0xc000f42780) (5) Data frame handling I0929 11:37:52.561152 7 
log.go:181] (0xc003ab0b00) Data frame received for 1 I0929 11:37:52.561192 7 log.go:181] (0xc00226cf00) (1) Data frame handling I0929 11:37:52.561225 7 log.go:181] (0xc00226cf00) (1) Data frame sent I0929 11:37:52.561241 7 log.go:181] (0xc003ab0b00) (0xc00226cf00) Stream removed, broadcasting: 1 I0929 11:37:52.561259 7 log.go:181] (0xc003ab0b00) Go away received I0929 11:37:52.561446 7 log.go:181] (0xc003ab0b00) (0xc00226cf00) Stream removed, broadcasting: 1 I0929 11:37:52.561478 7 log.go:181] (0xc003ab0b00) (0xc00333fb80) Stream removed, broadcasting: 3 I0929 11:37:52.561487 7 log.go:181] (0xc003ab0b00) (0xc000f42780) Stream removed, broadcasting: 5 Sep 29 11:37:52.561: INFO: Exec stderr: "" Sep 29 11:37:52.561: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:52.561: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:52.620829 7 log.go:181] (0xc000192f20) (0xc00375e000) Create stream I0929 11:37:52.620948 7 log.go:181] (0xc000192f20) (0xc00375e000) Stream added, broadcasting: 1 I0929 11:37:52.623763 7 log.go:181] (0xc000192f20) Reply frame received for 1 I0929 11:37:52.623790 7 log.go:181] (0xc000192f20) (0xc00333fc20) Create stream I0929 11:37:52.623799 7 log.go:181] (0xc000192f20) (0xc00333fc20) Stream added, broadcasting: 3 I0929 11:37:52.624739 7 log.go:181] (0xc000192f20) Reply frame received for 3 I0929 11:37:52.624777 7 log.go:181] (0xc000192f20) (0xc006848280) Create stream I0929 11:37:52.624784 7 log.go:181] (0xc000192f20) (0xc006848280) Stream added, broadcasting: 5 I0929 11:37:52.625757 7 log.go:181] (0xc000192f20) Reply frame received for 5 I0929 11:37:52.688029 7 log.go:181] (0xc000192f20) Data frame received for 5 I0929 11:37:52.688076 7 log.go:181] (0xc006848280) (5) Data frame handling I0929 11:37:52.688108 7 log.go:181] (0xc000192f20) Data frame received for 3 I0929 
11:37:52.688133 7 log.go:181] (0xc00333fc20) (3) Data frame handling I0929 11:37:52.688162 7 log.go:181] (0xc00333fc20) (3) Data frame sent I0929 11:37:52.688181 7 log.go:181] (0xc000192f20) Data frame received for 3 I0929 11:37:52.688202 7 log.go:181] (0xc00333fc20) (3) Data frame handling I0929 11:37:52.691599 7 log.go:181] (0xc000192f20) Data frame received for 1 I0929 11:37:52.691638 7 log.go:181] (0xc00375e000) (1) Data frame handling I0929 11:37:52.691664 7 log.go:181] (0xc00375e000) (1) Data frame sent I0929 11:37:52.691712 7 log.go:181] (0xc000192f20) (0xc00375e000) Stream removed, broadcasting: 1 I0929 11:37:52.691739 7 log.go:181] (0xc000192f20) Go away received I0929 11:37:52.691836 7 log.go:181] (0xc000192f20) (0xc00375e000) Stream removed, broadcasting: 1 I0929 11:37:52.691862 7 log.go:181] (0xc000192f20) (0xc00333fc20) Stream removed, broadcasting: 3 I0929 11:37:52.691871 7 log.go:181] (0xc000192f20) (0xc006848280) Stream removed, broadcasting: 5 Sep 29 11:37:52.691: INFO: Exec stderr: "" Sep 29 11:37:52.691: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:52.691: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:52.721439 7 log.go:181] (0xc00331e8f0) (0xc00375e3c0) Create stream I0929 11:37:52.721465 7 log.go:181] (0xc00331e8f0) (0xc00375e3c0) Stream added, broadcasting: 1 I0929 11:37:52.725170 7 log.go:181] (0xc00331e8f0) Reply frame received for 1 I0929 11:37:52.725215 7 log.go:181] (0xc00331e8f0) (0xc00333fd60) Create stream I0929 11:37:52.725230 7 log.go:181] (0xc00331e8f0) (0xc00333fd60) Stream added, broadcasting: 3 I0929 11:37:52.726255 7 log.go:181] (0xc00331e8f0) Reply frame received for 3 I0929 11:37:52.726294 7 log.go:181] (0xc00331e8f0) (0xc00226cfa0) Create stream I0929 11:37:52.726308 7 log.go:181] (0xc00331e8f0) (0xc00226cfa0) Stream added, broadcasting: 5 I0929 
11:37:52.727251 7 log.go:181] (0xc00331e8f0) Reply frame received for 5 I0929 11:37:52.786844 7 log.go:181] (0xc00331e8f0) Data frame received for 5 I0929 11:37:52.786892 7 log.go:181] (0xc00226cfa0) (5) Data frame handling I0929 11:37:52.786922 7 log.go:181] (0xc00331e8f0) Data frame received for 3 I0929 11:37:52.786938 7 log.go:181] (0xc00333fd60) (3) Data frame handling I0929 11:37:52.786959 7 log.go:181] (0xc00333fd60) (3) Data frame sent I0929 11:37:52.787030 7 log.go:181] (0xc00331e8f0) Data frame received for 3 I0929 11:37:52.787049 7 log.go:181] (0xc00333fd60) (3) Data frame handling I0929 11:37:52.788289 7 log.go:181] (0xc00331e8f0) Data frame received for 1 I0929 11:37:52.788312 7 log.go:181] (0xc00375e3c0) (1) Data frame handling I0929 11:37:52.788334 7 log.go:181] (0xc00375e3c0) (1) Data frame sent I0929 11:37:52.788356 7 log.go:181] (0xc00331e8f0) (0xc00375e3c0) Stream removed, broadcasting: 1 I0929 11:37:52.788451 7 log.go:181] (0xc00331e8f0) (0xc00375e3c0) Stream removed, broadcasting: 1 I0929 11:37:52.788472 7 log.go:181] (0xc00331e8f0) (0xc00333fd60) Stream removed, broadcasting: 3 I0929 11:37:52.788509 7 log.go:181] (0xc00331e8f0) Go away received I0929 11:37:52.788590 7 log.go:181] (0xc00331e8f0) (0xc00226cfa0) Stream removed, broadcasting: 5 Sep 29 11:37:52.788: INFO: Exec stderr: "" Sep 29 11:37:52.788: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:52.788: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:52.814289 7 log.go:181] (0xc003ab11e0) (0xc00226d2c0) Create stream I0929 11:37:52.814323 7 log.go:181] (0xc003ab11e0) (0xc00226d2c0) Stream added, broadcasting: 1 I0929 11:37:52.817095 7 log.go:181] (0xc003ab11e0) Reply frame received for 1 I0929 11:37:52.817135 7 log.go:181] (0xc003ab11e0) (0xc003533c20) Create stream I0929 11:37:52.817147 7 log.go:181] (0xc003ab11e0) 
(0xc003533c20) Stream added, broadcasting: 3 I0929 11:37:52.818211 7 log.go:181] (0xc003ab11e0) Reply frame received for 3 I0929 11:37:52.818244 7 log.go:181] (0xc003ab11e0) (0xc00226d400) Create stream I0929 11:37:52.818254 7 log.go:181] (0xc003ab11e0) (0xc00226d400) Stream added, broadcasting: 5 I0929 11:37:52.819026 7 log.go:181] (0xc003ab11e0) Reply frame received for 5 I0929 11:37:52.900396 7 log.go:181] (0xc003ab11e0) Data frame received for 5 I0929 11:37:52.900434 7 log.go:181] (0xc00226d400) (5) Data frame handling I0929 11:37:52.900457 7 log.go:181] (0xc003ab11e0) Data frame received for 3 I0929 11:37:52.900469 7 log.go:181] (0xc003533c20) (3) Data frame handling I0929 11:37:52.900483 7 log.go:181] (0xc003533c20) (3) Data frame sent I0929 11:37:52.900499 7 log.go:181] (0xc003ab11e0) Data frame received for 3 I0929 11:37:52.900508 7 log.go:181] (0xc003533c20) (3) Data frame handling I0929 11:37:52.901874 7 log.go:181] (0xc003ab11e0) Data frame received for 1 I0929 11:37:52.901905 7 log.go:181] (0xc00226d2c0) (1) Data frame handling I0929 11:37:52.901913 7 log.go:181] (0xc00226d2c0) (1) Data frame sent I0929 11:37:52.901920 7 log.go:181] (0xc003ab11e0) (0xc00226d2c0) Stream removed, broadcasting: 1 I0929 11:37:52.901933 7 log.go:181] (0xc003ab11e0) Go away received I0929 11:37:52.902049 7 log.go:181] (0xc003ab11e0) (0xc00226d2c0) Stream removed, broadcasting: 1 I0929 11:37:52.902067 7 log.go:181] (0xc003ab11e0) (0xc003533c20) Stream removed, broadcasting: 3 I0929 11:37:52.902077 7 log.go:181] (0xc003ab11e0) (0xc00226d400) Stream removed, broadcasting: 5 Sep 29 11:37:52.902: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Sep 29 11:37:52.902: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:52.902: INFO: >>> kubeConfig: 
/root/.kube/config I0929 11:37:52.971276 7 log.go:181] (0xc00331f080) (0xc00375e6e0) Create stream I0929 11:37:52.971302 7 log.go:181] (0xc00331f080) (0xc00375e6e0) Stream added, broadcasting: 1 I0929 11:37:52.974201 7 log.go:181] (0xc00331f080) Reply frame received for 1 I0929 11:37:52.974244 7 log.go:181] (0xc00331f080) (0xc00375e780) Create stream I0929 11:37:52.974254 7 log.go:181] (0xc00331f080) (0xc00375e780) Stream added, broadcasting: 3 I0929 11:37:52.975223 7 log.go:181] (0xc00331f080) Reply frame received for 3 I0929 11:37:52.975250 7 log.go:181] (0xc00331f080) (0xc00375e820) Create stream I0929 11:37:52.975256 7 log.go:181] (0xc00331f080) (0xc00375e820) Stream added, broadcasting: 5 I0929 11:37:52.976105 7 log.go:181] (0xc00331f080) Reply frame received for 5 I0929 11:37:53.028429 7 log.go:181] (0xc00331f080) Data frame received for 5 I0929 11:37:53.028466 7 log.go:181] (0xc00375e820) (5) Data frame handling I0929 11:37:53.028493 7 log.go:181] (0xc00331f080) Data frame received for 3 I0929 11:37:53.028505 7 log.go:181] (0xc00375e780) (3) Data frame handling I0929 11:37:53.028526 7 log.go:181] (0xc00375e780) (3) Data frame sent I0929 11:37:53.028546 7 log.go:181] (0xc00331f080) Data frame received for 3 I0929 11:37:53.028557 7 log.go:181] (0xc00375e780) (3) Data frame handling I0929 11:37:53.030153 7 log.go:181] (0xc00331f080) Data frame received for 1 I0929 11:37:53.030185 7 log.go:181] (0xc00375e6e0) (1) Data frame handling I0929 11:37:53.030200 7 log.go:181] (0xc00375e6e0) (1) Data frame sent I0929 11:37:53.030224 7 log.go:181] (0xc00331f080) (0xc00375e6e0) Stream removed, broadcasting: 1 I0929 11:37:53.030260 7 log.go:181] (0xc00331f080) Go away received I0929 11:37:53.030366 7 log.go:181] (0xc00331f080) (0xc00375e6e0) Stream removed, broadcasting: 1 I0929 11:37:53.030383 7 log.go:181] (0xc00331f080) (0xc00375e780) Stream removed, broadcasting: 3 I0929 11:37:53.030394 7 log.go:181] (0xc00331f080) (0xc00375e820) Stream removed, broadcasting: 5 Sep 29 
11:37:53.030: INFO: Exec stderr: "" Sep 29 11:37:53.030: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:53.030: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:53.064161 7 log.go:181] (0xc00331f760) (0xc00375eaa0) Create stream I0929 11:37:53.064185 7 log.go:181] (0xc00331f760) (0xc00375eaa0) Stream added, broadcasting: 1 I0929 11:37:53.066810 7 log.go:181] (0xc00331f760) Reply frame received for 1 I0929 11:37:53.066843 7 log.go:181] (0xc00331f760) (0xc006848320) Create stream I0929 11:37:53.066850 7 log.go:181] (0xc00331f760) (0xc006848320) Stream added, broadcasting: 3 I0929 11:37:53.067743 7 log.go:181] (0xc00331f760) Reply frame received for 3 I0929 11:37:53.067800 7 log.go:181] (0xc00331f760) (0xc00333fea0) Create stream I0929 11:37:53.067824 7 log.go:181] (0xc00331f760) (0xc00333fea0) Stream added, broadcasting: 5 I0929 11:37:53.068753 7 log.go:181] (0xc00331f760) Reply frame received for 5 I0929 11:37:53.129519 7 log.go:181] (0xc00331f760) Data frame received for 5 I0929 11:37:53.129574 7 log.go:181] (0xc00333fea0) (5) Data frame handling I0929 11:37:53.129648 7 log.go:181] (0xc00331f760) Data frame received for 3 I0929 11:37:53.129693 7 log.go:181] (0xc006848320) (3) Data frame handling I0929 11:37:53.129735 7 log.go:181] (0xc006848320) (3) Data frame sent I0929 11:37:53.129760 7 log.go:181] (0xc00331f760) Data frame received for 3 I0929 11:37:53.129782 7 log.go:181] (0xc006848320) (3) Data frame handling I0929 11:37:53.134854 7 log.go:181] (0xc00331f760) Data frame received for 1 I0929 11:37:53.134911 7 log.go:181] (0xc00375eaa0) (1) Data frame handling I0929 11:37:53.134937 7 log.go:181] (0xc00375eaa0) (1) Data frame sent I0929 11:37:53.134962 7 log.go:181] (0xc00331f760) (0xc00375eaa0) Stream removed, broadcasting: 1 I0929 11:37:53.134992 7 log.go:181] (0xc00331f760) Go away 
received I0929 11:37:53.135192 7 log.go:181] (0xc00331f760) (0xc00375eaa0) Stream removed, broadcasting: 1 I0929 11:37:53.135233 7 log.go:181] (0xc00331f760) (0xc006848320) Stream removed, broadcasting: 3 I0929 11:37:53.135260 7 log.go:181] (0xc00331f760) (0xc00333fea0) Stream removed, broadcasting: 5 Sep 29 11:37:53.135: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Sep 29 11:37:53.135: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:53.135: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:53.164688 7 log.go:181] (0xc000914e70) (0xc0068486e0) Create stream I0929 11:37:53.164712 7 log.go:181] (0xc000914e70) (0xc0068486e0) Stream added, broadcasting: 1 I0929 11:37:53.170276 7 log.go:181] (0xc000914e70) Reply frame received for 1 I0929 11:37:53.170317 7 log.go:181] (0xc000914e70) (0xc00375eb40) Create stream I0929 11:37:53.170331 7 log.go:181] (0xc000914e70) (0xc00375eb40) Stream added, broadcasting: 3 I0929 11:37:53.171490 7 log.go:181] (0xc000914e70) Reply frame received for 3 I0929 11:37:53.171530 7 log.go:181] (0xc000914e70) (0xc002446000) Create stream I0929 11:37:53.171543 7 log.go:181] (0xc000914e70) (0xc002446000) Stream added, broadcasting: 5 I0929 11:37:53.172454 7 log.go:181] (0xc000914e70) Reply frame received for 5 I0929 11:37:53.240464 7 log.go:181] (0xc000914e70) Data frame received for 5 I0929 11:37:53.240512 7 log.go:181] (0xc002446000) (5) Data frame handling I0929 11:37:53.240538 7 log.go:181] (0xc000914e70) Data frame received for 3 I0929 11:37:53.240554 7 log.go:181] (0xc00375eb40) (3) Data frame handling I0929 11:37:53.240566 7 log.go:181] (0xc00375eb40) (3) Data frame sent I0929 11:37:53.240581 7 log.go:181] (0xc000914e70) Data frame received for 3 I0929 11:37:53.240600 7 log.go:181] 
(0xc00375eb40) (3) Data frame handling I0929 11:37:53.243265 7 log.go:181] (0xc000914e70) Data frame received for 1 I0929 11:37:53.243307 7 log.go:181] (0xc0068486e0) (1) Data frame handling I0929 11:37:53.243328 7 log.go:181] (0xc0068486e0) (1) Data frame sent I0929 11:37:53.243346 7 log.go:181] (0xc000914e70) (0xc0068486e0) Stream removed, broadcasting: 1 I0929 11:37:53.243371 7 log.go:181] (0xc000914e70) Go away received I0929 11:37:53.243524 7 log.go:181] (0xc000914e70) (0xc0068486e0) Stream removed, broadcasting: 1 I0929 11:37:53.243553 7 log.go:181] (0xc000914e70) (0xc00375eb40) Stream removed, broadcasting: 3 I0929 11:37:53.243566 7 log.go:181] (0xc000914e70) (0xc002446000) Stream removed, broadcasting: 5 Sep 29 11:37:53.243: INFO: Exec stderr: "" Sep 29 11:37:53.243: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:53.243: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:53.271438 7 log.go:181] (0xc003ab18c0) (0xc00226d680) Create stream I0929 11:37:53.271464 7 log.go:181] (0xc003ab18c0) (0xc00226d680) Stream added, broadcasting: 1 I0929 11:37:53.274456 7 log.go:181] (0xc003ab18c0) Reply frame received for 1 I0929 11:37:53.274547 7 log.go:181] (0xc003ab18c0) (0xc002446320) Create stream I0929 11:37:53.274577 7 log.go:181] (0xc003ab18c0) (0xc002446320) Stream added, broadcasting: 3 I0929 11:37:53.275554 7 log.go:181] (0xc003ab18c0) Reply frame received for 3 I0929 11:37:53.275597 7 log.go:181] (0xc003ab18c0) (0xc0024463c0) Create stream I0929 11:37:53.275611 7 log.go:181] (0xc003ab18c0) (0xc0024463c0) Stream added, broadcasting: 5 I0929 11:37:53.276742 7 log.go:181] (0xc003ab18c0) Reply frame received for 5 I0929 11:37:53.366933 7 log.go:181] (0xc003ab18c0) Data frame received for 3 I0929 11:37:53.366975 7 log.go:181] (0xc002446320) (3) Data frame handling I0929 
11:37:53.366985 7 log.go:181] (0xc002446320) (3) Data frame sent I0929 11:37:53.366991 7 log.go:181] (0xc003ab18c0) Data frame received for 3 I0929 11:37:53.366995 7 log.go:181] (0xc002446320) (3) Data frame handling I0929 11:37:53.367089 7 log.go:181] (0xc003ab18c0) Data frame received for 5 I0929 11:37:53.367130 7 log.go:181] (0xc0024463c0) (5) Data frame handling I0929 11:37:53.368322 7 log.go:181] (0xc003ab18c0) Data frame received for 1 I0929 11:37:53.368351 7 log.go:181] (0xc00226d680) (1) Data frame handling I0929 11:37:53.368385 7 log.go:181] (0xc00226d680) (1) Data frame sent I0929 11:37:53.368418 7 log.go:181] (0xc003ab18c0) (0xc00226d680) Stream removed, broadcasting: 1 I0929 11:37:53.368453 7 log.go:181] (0xc003ab18c0) Go away received I0929 11:37:53.368511 7 log.go:181] (0xc003ab18c0) (0xc00226d680) Stream removed, broadcasting: 1 I0929 11:37:53.368523 7 log.go:181] (0xc003ab18c0) (0xc002446320) Stream removed, broadcasting: 3 I0929 11:37:53.368529 7 log.go:181] (0xc003ab18c0) (0xc0024463c0) Stream removed, broadcasting: 5 Sep 29 11:37:53.368: INFO: Exec stderr: "" Sep 29 11:37:53.368: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:53.368: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:53.398672 7 log.go:181] (0xc003fb4420) (0xc0030ac000) Create stream I0929 11:37:53.398703 7 log.go:181] (0xc003fb4420) (0xc0030ac000) Stream added, broadcasting: 1 I0929 11:37:53.401440 7 log.go:181] (0xc003fb4420) Reply frame received for 1 I0929 11:37:53.401506 7 log.go:181] (0xc003fb4420) (0xc00226d720) Create stream I0929 11:37:53.401527 7 log.go:181] (0xc003fb4420) (0xc00226d720) Stream added, broadcasting: 3 I0929 11:37:53.402680 7 log.go:181] (0xc003fb4420) Reply frame received for 3 I0929 11:37:53.402707 7 log.go:181] (0xc003fb4420) (0xc006848820) Create stream I0929 11:37:53.402730 7 
log.go:181] (0xc003fb4420) (0xc006848820) Stream added, broadcasting: 5 I0929 11:37:53.403803 7 log.go:181] (0xc003fb4420) Reply frame received for 5 I0929 11:37:53.468291 7 log.go:181] (0xc003fb4420) Data frame received for 5 I0929 11:37:53.468334 7 log.go:181] (0xc006848820) (5) Data frame handling I0929 11:37:53.468384 7 log.go:181] (0xc003fb4420) Data frame received for 3 I0929 11:37:53.468425 7 log.go:181] (0xc00226d720) (3) Data frame handling I0929 11:37:53.468454 7 log.go:181] (0xc00226d720) (3) Data frame sent I0929 11:37:53.468473 7 log.go:181] (0xc003fb4420) Data frame received for 3 I0929 11:37:53.468488 7 log.go:181] (0xc00226d720) (3) Data frame handling I0929 11:37:53.470131 7 log.go:181] (0xc003fb4420) Data frame received for 1 I0929 11:37:53.470161 7 log.go:181] (0xc0030ac000) (1) Data frame handling I0929 11:37:53.470190 7 log.go:181] (0xc0030ac000) (1) Data frame sent I0929 11:37:53.470251 7 log.go:181] (0xc003fb4420) (0xc0030ac000) Stream removed, broadcasting: 1 I0929 11:37:53.470273 7 log.go:181] (0xc003fb4420) Go away received I0929 11:37:53.470382 7 log.go:181] (0xc003fb4420) (0xc0030ac000) Stream removed, broadcasting: 1 I0929 11:37:53.470437 7 log.go:181] (0xc003fb4420) (0xc00226d720) Stream removed, broadcasting: 3 I0929 11:37:53.470465 7 log.go:181] (0xc003fb4420) (0xc006848820) Stream removed, broadcasting: 5 Sep 29 11:37:53.470: INFO: Exec stderr: "" Sep 29 11:37:53.470: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3226 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:37:53.470: INFO: >>> kubeConfig: /root/.kube/config I0929 11:37:53.502491 7 log.go:181] (0xc000915550) (0xc006848aa0) Create stream I0929 11:37:53.502514 7 log.go:181] (0xc000915550) (0xc006848aa0) Stream added, broadcasting: 1 I0929 11:37:53.505885 7 log.go:181] (0xc000915550) Reply frame received for 1 I0929 11:37:53.505935 7 log.go:181] 
(0xc000915550) (0xc006848b40) Create stream I0929 11:37:53.505953 7 log.go:181] (0xc000915550) (0xc006848b40) Stream added, broadcasting: 3 I0929 11:37:53.507056 7 log.go:181] (0xc000915550) Reply frame received for 3 I0929 11:37:53.507089 7 log.go:181] (0xc000915550) (0xc00226d7c0) Create stream I0929 11:37:53.507102 7 log.go:181] (0xc000915550) (0xc00226d7c0) Stream added, broadcasting: 5 I0929 11:37:53.508289 7 log.go:181] (0xc000915550) Reply frame received for 5 I0929 11:37:53.571426 7 log.go:181] (0xc000915550) Data frame received for 3 I0929 11:37:53.571457 7 log.go:181] (0xc006848b40) (3) Data frame handling I0929 11:37:53.571477 7 log.go:181] (0xc006848b40) (3) Data frame sent I0929 11:37:53.571614 7 log.go:181] (0xc000915550) Data frame received for 5 I0929 11:37:53.571625 7 log.go:181] (0xc00226d7c0) (5) Data frame handling I0929 11:37:53.571653 7 log.go:181] (0xc000915550) Data frame received for 3 I0929 11:37:53.571683 7 log.go:181] (0xc006848b40) (3) Data frame handling I0929 11:37:53.573351 7 log.go:181] (0xc000915550) Data frame received for 1 I0929 11:37:53.573373 7 log.go:181] (0xc006848aa0) (1) Data frame handling I0929 11:37:53.573381 7 log.go:181] (0xc006848aa0) (1) Data frame sent I0929 11:37:53.573507 7 log.go:181] (0xc000915550) (0xc006848aa0) Stream removed, broadcasting: 1 I0929 11:37:53.573581 7 log.go:181] (0xc000915550) Go away received I0929 11:37:53.573646 7 log.go:181] (0xc000915550) (0xc006848aa0) Stream removed, broadcasting: 1 I0929 11:37:53.573672 7 log.go:181] (0xc000915550) (0xc006848b40) Stream removed, broadcasting: 3 I0929 11:37:53.573680 7 log.go:181] (0xc000915550) (0xc00226d7c0) Stream removed, broadcasting: 5 Sep 29 11:37:53.573: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:37:53.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3226" for this suite. • [SLOW TEST:11.448 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":3087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:37:53.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 29 11:37:53.668: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 29 11:37:53.682: INFO: Waiting for terminating namespaces to be deleted... 
Sep 29 11:37:53.685: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 29 11:37:53.709: INFO: test-pod from e2e-kubelet-etc-hosts-3226 started at 2020-09-29 11:37:42 +0000 UTC (3 container statuses recorded) Sep 29 11:37:53.709: INFO: Container busybox-1 ready: true, restart count 0 Sep 29 11:37:53.709: INFO: Container busybox-2 ready: true, restart count 0 Sep 29 11:37:53.709: INFO: Container busybox-3 ready: true, restart count 0 Sep 29 11:37:53.709: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.709: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:37:53.709: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.709: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:37:53.709: INFO: netserver-0 from pod-network-test-8214 started at 2020-09-29 11:37:17 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.709: INFO: Container webserver ready: false, restart count 0 Sep 29 11:37:53.709: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 29 11:37:53.716: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-3226 started at 2020-09-29 11:37:48 +0000 UTC (2 container statuses recorded) Sep 29 11:37:53.716: INFO: Container busybox-1 ready: true, restart count 0 Sep 29 11:37:53.716: INFO: Container busybox-2 ready: true, restart count 0 Sep 29 11:37:53.716: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.716: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:37:53.716: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.716: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:37:53.716: INFO: netserver-1 from pod-network-test-8214 started 
at 2020-09-29 11:37:17 +0000 UTC (1 container statuses recorded) Sep 29 11:37:53.716: INFO: Container webserver ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16393e259b5cf1f4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16393e259ccaf574], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:37:54.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9487" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":193,"skipped":3122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:37:54.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:39:55.001: INFO: Deleting pod "var-expansion-b2abfc1d-1ddb-4294-abca-ca0265208b00" in namespace "var-expansion-3211" Sep 29 11:39:55.005: INFO: Wait up to 5m0s for pod "var-expansion-b2abfc1d-1ddb-4294-abca-ca0265208b00" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:39:59.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3211" 
for this suite. • [SLOW TEST:124.290 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":194,"skipped":3162,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:39:59.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8207 STEP: creating service affinity-nodeport-transition in namespace services-8207 STEP: creating 
replication controller affinity-nodeport-transition in namespace services-8207 I0929 11:39:59.151127 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-8207, replica count: 3 I0929 11:40:02.201535 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:40:05.201759 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:40:05.213: INFO: Creating new exec pod Sep 29 11:40:10.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Sep 29 11:40:10.459: INFO: stderr: "I0929 11:40:10.384035 2077 log.go:181] (0xc000eb0fd0) (0xc000f98960) Create stream\nI0929 11:40:10.384104 2077 log.go:181] (0xc000eb0fd0) (0xc000f98960) Stream added, broadcasting: 1\nI0929 11:40:10.391106 2077 log.go:181] (0xc000eb0fd0) Reply frame received for 1\nI0929 11:40:10.391174 2077 log.go:181] (0xc000eb0fd0) (0xc000a1c1e0) Create stream\nI0929 11:40:10.391183 2077 log.go:181] (0xc000eb0fd0) (0xc000a1c1e0) Stream added, broadcasting: 3\nI0929 11:40:10.392010 2077 log.go:181] (0xc000eb0fd0) Reply frame received for 3\nI0929 11:40:10.392042 2077 log.go:181] (0xc000eb0fd0) (0xc000f98000) Create stream\nI0929 11:40:10.392052 2077 log.go:181] (0xc000eb0fd0) (0xc000f98000) Stream added, broadcasting: 5\nI0929 11:40:10.393263 2077 log.go:181] (0xc000eb0fd0) Reply frame received for 5\nI0929 11:40:10.452357 2077 log.go:181] (0xc000eb0fd0) Data frame received for 5\nI0929 11:40:10.452390 2077 log.go:181] (0xc000f98000) (5) Data frame handling\nI0929 11:40:10.452416 2077 log.go:181] (0xc000f98000) (5) Data frame sent\n+ nc -zv -t -w 2 
affinity-nodeport-transition 80\nI0929 11:40:10.452899 2077 log.go:181] (0xc000eb0fd0) Data frame received for 5\nI0929 11:40:10.452929 2077 log.go:181] (0xc000f98000) (5) Data frame handling\nI0929 11:40:10.452950 2077 log.go:181] (0xc000f98000) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0929 11:40:10.453176 2077 log.go:181] (0xc000eb0fd0) Data frame received for 5\nI0929 11:40:10.453207 2077 log.go:181] (0xc000f98000) (5) Data frame handling\nI0929 11:40:10.453383 2077 log.go:181] (0xc000eb0fd0) Data frame received for 3\nI0929 11:40:10.453395 2077 log.go:181] (0xc000a1c1e0) (3) Data frame handling\nI0929 11:40:10.455166 2077 log.go:181] (0xc000eb0fd0) Data frame received for 1\nI0929 11:40:10.455188 2077 log.go:181] (0xc000f98960) (1) Data frame handling\nI0929 11:40:10.455198 2077 log.go:181] (0xc000f98960) (1) Data frame sent\nI0929 11:40:10.455209 2077 log.go:181] (0xc000eb0fd0) (0xc000f98960) Stream removed, broadcasting: 1\nI0929 11:40:10.455308 2077 log.go:181] (0xc000eb0fd0) Go away received\nI0929 11:40:10.455522 2077 log.go:181] (0xc000eb0fd0) (0xc000f98960) Stream removed, broadcasting: 1\nI0929 11:40:10.455539 2077 log.go:181] (0xc000eb0fd0) (0xc000a1c1e0) Stream removed, broadcasting: 3\nI0929 11:40:10.455544 2077 log.go:181] (0xc000eb0fd0) (0xc000f98000) Stream removed, broadcasting: 5\n" Sep 29 11:40:10.459: INFO: stdout: "" Sep 29 11:40:10.459: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c nc -zv -t -w 2 10.102.25.97 80' Sep 29 11:40:10.654: INFO: stderr: "I0929 11:40:10.586759 2095 log.go:181] (0xc0009cd760) (0xc0009c4aa0) Create stream\nI0929 11:40:10.586830 2095 log.go:181] (0xc0009cd760) (0xc0009c4aa0) Stream added, broadcasting: 1\nI0929 11:40:10.591916 2095 log.go:181] (0xc0009cd760) Reply frame received for 1\nI0929 11:40:10.591959 2095 log.go:181] 
(0xc0009cd760) (0xc000999ea0) Create stream\nI0929 11:40:10.591970 2095 log.go:181] (0xc0009cd760) (0xc000999ea0) Stream added, broadcasting: 3\nI0929 11:40:10.592813 2095 log.go:181] (0xc0009cd760) Reply frame received for 3\nI0929 11:40:10.592906 2095 log.go:181] (0xc0009cd760) (0xc000d2e000) Create stream\nI0929 11:40:10.592919 2095 log.go:181] (0xc0009cd760) (0xc000d2e000) Stream added, broadcasting: 5\nI0929 11:40:10.593758 2095 log.go:181] (0xc0009cd760) Reply frame received for 5\nI0929 11:40:10.647026 2095 log.go:181] (0xc0009cd760) Data frame received for 3\nI0929 11:40:10.647061 2095 log.go:181] (0xc000999ea0) (3) Data frame handling\nI0929 11:40:10.647083 2095 log.go:181] (0xc0009cd760) Data frame received for 5\nI0929 11:40:10.647089 2095 log.go:181] (0xc000d2e000) (5) Data frame handling\nI0929 11:40:10.647099 2095 log.go:181] (0xc000d2e000) (5) Data frame sent\nI0929 11:40:10.647104 2095 log.go:181] (0xc0009cd760) Data frame received for 5\nI0929 11:40:10.647108 2095 log.go:181] (0xc000d2e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.25.97 80\nConnection to 10.102.25.97 80 port [tcp/http] succeeded!\nI0929 11:40:10.648340 2095 log.go:181] (0xc0009cd760) Data frame received for 1\nI0929 11:40:10.648389 2095 log.go:181] (0xc0009c4aa0) (1) Data frame handling\nI0929 11:40:10.648517 2095 log.go:181] (0xc0009c4aa0) (1) Data frame sent\nI0929 11:40:10.648557 2095 log.go:181] (0xc0009cd760) (0xc0009c4aa0) Stream removed, broadcasting: 1\nI0929 11:40:10.648586 2095 log.go:181] (0xc0009cd760) Go away received\nI0929 11:40:10.648898 2095 log.go:181] (0xc0009cd760) (0xc0009c4aa0) Stream removed, broadcasting: 1\nI0929 11:40:10.648914 2095 log.go:181] (0xc0009cd760) (0xc000999ea0) Stream removed, broadcasting: 3\nI0929 11:40:10.648921 2095 log.go:181] (0xc0009cd760) (0xc000d2e000) Stream removed, broadcasting: 5\n" Sep 29 11:40:10.654: INFO: stdout: "" Sep 29 11:40:10.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32575' Sep 29 11:40:10.853: INFO: stderr: "I0929 11:40:10.785217 2113 log.go:181] (0xc000196dc0) (0xc0000cdae0) Create stream\nI0929 11:40:10.785268 2113 log.go:181] (0xc000196dc0) (0xc0000cdae0) Stream added, broadcasting: 1\nI0929 11:40:10.790459 2113 log.go:181] (0xc000196dc0) Reply frame received for 1\nI0929 11:40:10.790515 2113 log.go:181] (0xc000196dc0) (0xc000e0a000) Create stream\nI0929 11:40:10.790534 2113 log.go:181] (0xc000196dc0) (0xc000e0a000) Stream added, broadcasting: 3\nI0929 11:40:10.791651 2113 log.go:181] (0xc000196dc0) Reply frame received for 3\nI0929 11:40:10.791724 2113 log.go:181] (0xc000196dc0) (0xc00031d9a0) Create stream\nI0929 11:40:10.791762 2113 log.go:181] (0xc000196dc0) (0xc00031d9a0) Stream added, broadcasting: 5\nI0929 11:40:10.792790 2113 log.go:181] (0xc000196dc0) Reply frame received for 5\nI0929 11:40:10.844303 2113 log.go:181] (0xc000196dc0) Data frame received for 5\nI0929 11:40:10.844345 2113 log.go:181] (0xc00031d9a0) (5) Data frame handling\nI0929 11:40:10.844368 2113 log.go:181] (0xc00031d9a0) (5) Data frame sent\nI0929 11:40:10.844380 2113 log.go:181] (0xc000196dc0) Data frame received for 5\nI0929 11:40:10.844390 2113 log.go:181] (0xc00031d9a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32575\nConnection to 172.18.0.12 32575 port [tcp/32575] succeeded!\nI0929 11:40:10.844432 2113 log.go:181] (0xc00031d9a0) (5) Data frame sent\nI0929 11:40:10.845350 2113 log.go:181] (0xc000196dc0) Data frame received for 5\nI0929 11:40:10.845390 2113 log.go:181] (0xc00031d9a0) (5) Data frame handling\nI0929 11:40:10.845443 2113 log.go:181] (0xc000196dc0) Data frame received for 3\nI0929 11:40:10.845463 2113 log.go:181] (0xc000e0a000) (3) Data frame handling\nI0929 11:40:10.847098 2113 log.go:181] (0xc000196dc0) Data frame received for 1\nI0929 11:40:10.847140 2113 log.go:181] (0xc0000cdae0) (1) 
Data frame handling\nI0929 11:40:10.847177 2113 log.go:181] (0xc0000cdae0) (1) Data frame sent\nI0929 11:40:10.847212 2113 log.go:181] (0xc000196dc0) (0xc0000cdae0) Stream removed, broadcasting: 1\nI0929 11:40:10.847277 2113 log.go:181] (0xc000196dc0) Go away received\nI0929 11:40:10.847878 2113 log.go:181] (0xc000196dc0) (0xc0000cdae0) Stream removed, broadcasting: 1\nI0929 11:40:10.847904 2113 log.go:181] (0xc000196dc0) (0xc000e0a000) Stream removed, broadcasting: 3\nI0929 11:40:10.847923 2113 log.go:181] (0xc000196dc0) (0xc00031d9a0) Stream removed, broadcasting: 5\n" Sep 29 11:40:10.853: INFO: stdout: "" Sep 29 11:40:10.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32575' Sep 29 11:40:11.071: INFO: stderr: "I0929 11:40:10.981947 2132 log.go:181] (0xc000facf20) (0xc0004821e0) Create stream\nI0929 11:40:10.982012 2132 log.go:181] (0xc000facf20) (0xc0004821e0) Stream added, broadcasting: 1\nI0929 11:40:10.987151 2132 log.go:181] (0xc000facf20) Reply frame received for 1\nI0929 11:40:10.987211 2132 log.go:181] (0xc000facf20) (0xc000b48000) Create stream\nI0929 11:40:10.987228 2132 log.go:181] (0xc000facf20) (0xc000b48000) Stream added, broadcasting: 3\nI0929 11:40:10.988395 2132 log.go:181] (0xc000facf20) Reply frame received for 3\nI0929 11:40:10.988435 2132 log.go:181] (0xc000facf20) (0xc000482dc0) Create stream\nI0929 11:40:10.988448 2132 log.go:181] (0xc000facf20) (0xc000482dc0) Stream added, broadcasting: 5\nI0929 11:40:10.989520 2132 log.go:181] (0xc000facf20) Reply frame received for 5\nI0929 11:40:11.064346 2132 log.go:181] (0xc000facf20) Data frame received for 3\nI0929 11:40:11.064393 2132 log.go:181] (0xc000b48000) (3) Data frame handling\nI0929 11:40:11.064452 2132 log.go:181] (0xc000facf20) Data frame received for 5\nI0929 11:40:11.064478 2132 log.go:181] (0xc000482dc0) (5) Data frame 
handling\n+ nc -zv -t -w 2 172.18.0.13 32575\nConnection to 172.18.0.13 32575 port [tcp/32575] succeeded!\nI0929 11:40:11.064631 2132 log.go:181] (0xc000482dc0) (5) Data frame sent\nI0929 11:40:11.064699 2132 log.go:181] (0xc000facf20) Data frame received for 5\nI0929 11:40:11.064749 2132 log.go:181] (0xc000482dc0) (5) Data frame handling\nI0929 11:40:11.065916 2132 log.go:181] (0xc000facf20) Data frame received for 1\nI0929 11:40:11.065936 2132 log.go:181] (0xc0004821e0) (1) Data frame handling\nI0929 11:40:11.065950 2132 log.go:181] (0xc0004821e0) (1) Data frame sent\nI0929 11:40:11.065965 2132 log.go:181] (0xc000facf20) (0xc0004821e0) Stream removed, broadcasting: 1\nI0929 11:40:11.066086 2132 log.go:181] (0xc000facf20) Go away received\nI0929 11:40:11.066432 2132 log.go:181] (0xc000facf20) (0xc0004821e0) Stream removed, broadcasting: 1\nI0929 11:40:11.066471 2132 log.go:181] (0xc000facf20) (0xc000b48000) Stream removed, broadcasting: 3\nI0929 11:40:11.066489 2132 log.go:181] (0xc000facf20) (0xc000482dc0) Stream removed, broadcasting: 5\n" Sep 29 11:40:11.072: INFO: stdout: "" Sep 29 11:40:11.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32575/ ; done' Sep 29 11:40:11.405: INFO: stderr: "I0929 11:40:11.220469 2150 log.go:181] (0xc000cc5340) (0xc000acc3c0) Create stream\nI0929 11:40:11.220527 2150 log.go:181] (0xc000cc5340) (0xc000acc3c0) Stream added, broadcasting: 1\nI0929 11:40:11.223501 2150 log.go:181] (0xc000cc5340) Reply frame received for 1\nI0929 11:40:11.223534 2150 log.go:181] (0xc000cc5340) (0xc00064e000) Create stream\nI0929 11:40:11.223545 2150 log.go:181] (0xc000cc5340) (0xc00064e000) Stream added, broadcasting: 3\nI0929 11:40:11.224569 2150 log.go:181] (0xc000cc5340) Reply frame received for 3\nI0929 11:40:11.224627 2150 log.go:181] 
(0xc000cc5340) (0xc000c641e0) Create stream\nI0929 11:40:11.224653 2150 log.go:181] (0xc000cc5340) (0xc000c641e0) Stream added, broadcasting: 5\nI0929 11:40:11.225783 2150 log.go:181] (0xc000cc5340) Reply frame received for 5\nI0929 11:40:11.289856 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.290014 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.290054 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.290424 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.290460 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.290485 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.296542 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.296567 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.296580 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.297066 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.297095 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.297114 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.297128 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.297136 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.297141 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.303342 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.303372 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.303406 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.303718 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.303737 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.303743 2150 log.go:181] (0xc00064e000) 
(3) Data frame sent\nI0929 11:40:11.303752 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.303756 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.303767 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\nI0929 11:40:11.303772 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.303776 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.303786 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\nI0929 11:40:11.308232 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.308259 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.308287 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.308694 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.308713 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.308720 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.308729 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.308736 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.308741 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.314266 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.314343 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.314385 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.314961 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.314988 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.315041 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.315064 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.315077 2150 log.go:181] 
(0xc00064e000) (3) Data frame handling\nI0929 11:40:11.315109 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.319823 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.319848 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.319862 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.320361 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.320391 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.320412 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.320430 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.320441 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.320453 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.327109 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.327128 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.327146 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.327538 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.327589 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.327680 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.327698 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.327715 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.327736 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.333307 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.333325 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.333339 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.334071 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.334086 
2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.334102 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.334111 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.334117 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.334123 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.338902 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.338945 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.338980 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.339795 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.339809 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.339816 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.339836 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.339867 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.339893 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.347319 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.347414 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.347457 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.348320 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.348362 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.348374 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.348388 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.348396 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.348404 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 
11:40:11.355489 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.355512 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.355547 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.356296 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.356318 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.356338 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.357299 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.357325 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.357340 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.362157 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.362189 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.362273 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.362947 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.362982 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.363000 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.363023 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.363048 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.363071 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.369478 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.369515 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.369538 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.370382 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.370416 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.12:32575/\nI0929 11:40:11.370447 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.370473 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.370490 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.370506 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\nI0929 11:40:11.376183 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.376213 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.376232 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.376720 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.376747 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.376770 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.377323 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.377343 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.377367 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.382533 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.382550 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.382559 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.383065 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.383121 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.383145 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.383164 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.383183 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.383213 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.389157 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.389191 2150 
log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.389215 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.389485 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.389502 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.389525 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.389549 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.389562 2150 log.go:181] (0xc000c641e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.389590 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.396017 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.396047 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.396068 2150 log.go:181] (0xc00064e000) (3) Data frame sent\nI0929 11:40:11.397320 2150 log.go:181] (0xc000cc5340) Data frame received for 3\nI0929 11:40:11.397332 2150 log.go:181] (0xc00064e000) (3) Data frame handling\nI0929 11:40:11.397707 2150 log.go:181] (0xc000cc5340) Data frame received for 5\nI0929 11:40:11.397723 2150 log.go:181] (0xc000c641e0) (5) Data frame handling\nI0929 11:40:11.400096 2150 log.go:181] (0xc000cc5340) Data frame received for 1\nI0929 11:40:11.400134 2150 log.go:181] (0xc000acc3c0) (1) Data frame handling\nI0929 11:40:11.400150 2150 log.go:181] (0xc000acc3c0) (1) Data frame sent\nI0929 11:40:11.400171 2150 log.go:181] (0xc000cc5340) (0xc000acc3c0) Stream removed, broadcasting: 1\nI0929 11:40:11.400558 2150 log.go:181] (0xc000cc5340) (0xc000acc3c0) Stream removed, broadcasting: 1\nI0929 11:40:11.400578 2150 log.go:181] (0xc000cc5340) (0xc00064e000) Stream removed, broadcasting: 3\nI0929 11:40:11.400743 2150 log.go:181] (0xc000cc5340) (0xc000c641e0) Stream removed, broadcasting: 5\nI0929 11:40:11.401023 2150 log.go:181] (0xc000cc5340) Go away received\n" Sep 29 11:40:11.405: INFO: stdout: 
"\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-26n82\naffinity-nodeport-transition-26n82\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-26n82\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-26n82\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-ks7s7\naffinity-nodeport-transition-26n82" Sep 29 11:40:11.405: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.405: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.405: INFO: Received response from host: affinity-nodeport-transition-26n82 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-26n82 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-26n82 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-26n82 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.406: INFO: Received response from host: affinity-nodeport-transition-ks7s7 Sep 29 11:40:11.406: INFO: Received response from host: 
affinity-nodeport-transition-26n82 Sep 29 11:40:11.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8207 execpod-affinity5lk9b -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32575/ ; done' Sep 29 11:40:11.713: INFO: stderr: "I0929 11:40:11.549380 2168 log.go:181] (0xc0002580b0) (0xc000a025a0) Create stream\nI0929 11:40:11.549458 2168 log.go:181] (0xc0002580b0) (0xc000a025a0) Stream added, broadcasting: 1\nI0929 11:40:11.551616 2168 log.go:181] (0xc0002580b0) Reply frame received for 1\nI0929 11:40:11.551673 2168 log.go:181] (0xc0002580b0) (0xc000a17ae0) Create stream\nI0929 11:40:11.551690 2168 log.go:181] (0xc0002580b0) (0xc000a17ae0) Stream added, broadcasting: 3\nI0929 11:40:11.552550 2168 log.go:181] (0xc0002580b0) Reply frame received for 3\nI0929 11:40:11.552593 2168 log.go:181] (0xc0002580b0) (0xc000392000) Create stream\nI0929 11:40:11.552604 2168 log.go:181] (0xc0002580b0) (0xc000392000) Stream added, broadcasting: 5\nI0929 11:40:11.553618 2168 log.go:181] (0xc0002580b0) Reply frame received for 5\nI0929 11:40:11.622388 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.622428 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.622443 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.622475 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.622485 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.622503 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.628777 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.628903 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.628937 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.629267 2168 log.go:181] 
(0xc0002580b0) Data frame received for 3\nI0929 11:40:11.629296 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.629307 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.629324 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.629356 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.629381 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.636491 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.636513 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.636528 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.637515 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.637548 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.637562 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.637584 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.637595 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.637607 2168 log.go:181] (0xc000392000) (5) Data frame sent\nI0929 11:40:11.637619 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.637634 2168 log.go:181] (0xc000392000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.637658 2168 log.go:181] (0xc000392000) (5) Data frame sent\nI0929 11:40:11.641712 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.641725 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.641731 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.642587 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.642611 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.642622 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.642637 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.642645 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.642654 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.645758 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.645775 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.645792 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.646634 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.646669 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.646684 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.646703 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.646714 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.646725 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.653318 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.653334 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.653342 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.653934 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.653957 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.653973 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.653987 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.654001 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.654018 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.658551 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.658580 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 
11:40:11.658610 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.659191 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.659234 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.659244 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.659256 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.659262 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.659268 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.666598 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.666620 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.666637 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.667286 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.667315 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.667356 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.667383 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.667403 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.667427 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.675435 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.675462 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.675486 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.675868 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.675885 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.675894 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.675911 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.675919 2168 log.go:181] (0xc000392000) (5) Data frame 
handling\nI0929 11:40:11.675927 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.679488 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.679573 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.679611 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.679755 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.679793 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.679810 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.679829 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.679846 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.679857 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.682953 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.682969 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.682982 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.683332 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.683361 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.683406 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.683425 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.683432 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.683445 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.686232 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.686248 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.686256 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.686667 2168 log.go:181] (0xc0002580b0) Data frame 
received for 3\nI0929 11:40:11.686687 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.686714 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.686730 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.686735 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.686741 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.689552 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.689572 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.689617 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.689881 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.689897 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.689911 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.689967 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.689989 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.689999 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.694122 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.694140 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.694154 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.694595 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.694623 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.694634 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.694648 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.694655 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.694662 2168 log.go:181] (0xc000a17ae0) 
(3) Data frame sent\nI0929 11:40:11.697801 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.697816 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.697827 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.698215 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.698237 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.698247 2168 log.go:181] (0xc000392000) (5) Data frame sent\nI0929 11:40:11.698255 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.698262 2168 log.go:181] (0xc000392000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/\nI0929 11:40:11.698281 2168 log.go:181] (0xc000392000) (5) Data frame sent\nI0929 11:40:11.698293 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.698301 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.698310 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.702282 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.702293 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.702299 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.702824 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.702852 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.702866 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.702884 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.702894 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.702904 2168 log.go:181] (0xc000392000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32575/I0929 11:40:11.702920 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.702943 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.702956 2168 log.go:181] 
(0xc000392000) (5) Data frame sent\n\nI0929 11:40:11.707036 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.707071 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.707089 2168 log.go:181] (0xc000a17ae0) (3) Data frame sent\nI0929 11:40:11.707592 2168 log.go:181] (0xc0002580b0) Data frame received for 3\nI0929 11:40:11.707610 2168 log.go:181] (0xc000a17ae0) (3) Data frame handling\nI0929 11:40:11.707752 2168 log.go:181] (0xc0002580b0) Data frame received for 5\nI0929 11:40:11.707772 2168 log.go:181] (0xc000392000) (5) Data frame handling\nI0929 11:40:11.709520 2168 log.go:181] (0xc0002580b0) Data frame received for 1\nI0929 11:40:11.709540 2168 log.go:181] (0xc000a025a0) (1) Data frame handling\nI0929 11:40:11.709550 2168 log.go:181] (0xc000a025a0) (1) Data frame sent\nI0929 11:40:11.709568 2168 log.go:181] (0xc0002580b0) (0xc000a025a0) Stream removed, broadcasting: 1\nI0929 11:40:11.709593 2168 log.go:181] (0xc0002580b0) Go away received\nI0929 11:40:11.709838 2168 log.go:181] (0xc0002580b0) (0xc000a025a0) Stream removed, broadcasting: 1\nI0929 11:40:11.709851 2168 log.go:181] (0xc0002580b0) (0xc000a17ae0) Stream removed, broadcasting: 3\nI0929 11:40:11.709857 2168 log.go:181] (0xc0002580b0) (0xc000392000) Stream removed, broadcasting: 5\n" Sep 29 11:40:11.714: INFO: stdout: "\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph\naffinity-nodeport-transition-w4pph" Sep 29 11:40:11.714: INFO: Received response from host: 
affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Received response from host: affinity-nodeport-transition-w4pph Sep 29 11:40:11.714: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-8207, will wait for the garbage collector to delete the pods Sep 29 11:40:11.823: INFO: Deleting ReplicationController affinity-nodeport-transition took: 13.019174ms Sep 29 11:40:12.223: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.231074ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:40:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8207" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.736 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":195,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:40:28.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-7n5p STEP: Creating a pod to test atomic-volume-subpath Sep 29 11:40:28.864: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7n5p" in namespace "subpath-8513" to be "Succeeded or Failed" Sep 29 11:40:28.881: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Pending", Reason="", readiness=false. Elapsed: 16.568434ms Sep 29 11:40:30.885: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020780461s Sep 29 11:40:32.889: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 4.025023179s Sep 29 11:40:34.893: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 6.029419977s Sep 29 11:40:36.898: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 8.033814955s Sep 29 11:40:38.902: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 10.038322944s Sep 29 11:40:40.906: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 12.041907718s Sep 29 11:40:42.910: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 14.046423386s Sep 29 11:40:44.916: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 16.051661695s Sep 29 11:40:46.920: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.05592742s Sep 29 11:40:48.925: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 20.060586264s Sep 29 11:40:50.929: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Running", Reason="", readiness=true. Elapsed: 22.064607716s Sep 29 11:40:52.933: INFO: Pod "pod-subpath-test-secret-7n5p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068848908s STEP: Saw pod success Sep 29 11:40:52.933: INFO: Pod "pod-subpath-test-secret-7n5p" satisfied condition "Succeeded or Failed" Sep 29 11:40:52.936: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-secret-7n5p container test-container-subpath-secret-7n5p: STEP: delete the pod Sep 29 11:40:52.979: INFO: Waiting for pod pod-subpath-test-secret-7n5p to disappear Sep 29 11:40:52.990: INFO: Pod pod-subpath-test-secret-7n5p no longer exists STEP: Deleting pod pod-subpath-test-secret-7n5p Sep 29 11:40:52.990: INFO: Deleting pod "pod-subpath-test-secret-7n5p" in namespace "subpath-8513" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:40:52.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8513" for this suite. 
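The [sig-network] session-affinity test earlier in this log drives repeated `curl -q -s --connect-timeout 2 http://<node>:<nodePort>/` requests through the exec pod and then checks that every response named the same backend pod (here, `affinity-nodeport-transition-w4pph` sixteen times). A minimal sketch of that verification step, using illustrative response data rather than a live cluster:

```shell
# Responses collected from repeated curls to the NodePort
# (illustrative data standing in for the real curl output above).
responses="affinity-nodeport-transition-w4pph
affinity-nodeport-transition-w4pph
affinity-nodeport-transition-w4pph"

# Session affinity holds when every response names the same backend pod.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "affinity held"
else
  echo "affinity broken: $distinct distinct backends"
fi
```

This is the essence of what the framework asserts after collecting stdout from the exec pod; the real test also exercises switching affinity off and back on for the `transition` variant.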
• [SLOW TEST:24.218 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":196,"skipped":3197,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:40:53.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Sep 29 11:40:53.064: INFO: Waiting up to 5m0s for pod "pod-c94789e7-17ff-4601-883d-57ccd69b5ca9" in namespace "emptydir-325" to be "Succeeded or Failed" Sep 29 
11:40:53.080: INFO: Pod "pod-c94789e7-17ff-4601-883d-57ccd69b5ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.674801ms Sep 29 11:40:55.085: INFO: Pod "pod-c94789e7-17ff-4601-883d-57ccd69b5ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020437692s Sep 29 11:40:57.089: INFO: Pod "pod-c94789e7-17ff-4601-883d-57ccd69b5ca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0251133s STEP: Saw pod success Sep 29 11:40:57.089: INFO: Pod "pod-c94789e7-17ff-4601-883d-57ccd69b5ca9" satisfied condition "Succeeded or Failed" Sep 29 11:40:57.092: INFO: Trying to get logs from node kali-worker pod pod-c94789e7-17ff-4601-883d-57ccd69b5ca9 container test-container: STEP: delete the pod Sep 29 11:40:57.202: INFO: Waiting for pod pod-c94789e7-17ff-4601-883d-57ccd69b5ca9 to disappear Sep 29 11:40:57.214: INFO: Pod pod-c94789e7-17ff-4601-883d-57ccd69b5ca9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:40:57.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-325" for this suite. 
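The repeated `Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines in the storage tests above come from the framework's poll loop: query the pod phase on an interval until it reaches a terminal phase or the timeout expires. A hedged sketch of that loop; `get_phase` is a hypothetical stand-in for `kubectl get pod <name> -o jsonpath='{.status.phase}'` so the sketch runs outside a cluster:

```shell
# Hypothetical stand-in for querying the API server for the pod phase;
# a real implementation would shell out to kubectl here.
get_phase() { echo "Succeeded"; }

# Poll every $2 seconds until a terminal phase, giving up after $1 seconds,
# mirroring the "Waiting up to 5m0s ..." loop logged above.
wait_for_phase() {
  timeout=$1; interval=$2; elapsed=0
  while [ "$elapsed" -le "$timeout" ]; do
    phase=$(get_phase)
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out"; return 1
}

wait_for_phase 300 2
```

In the log, each iteration of this loop produces one `Phase="Pending"`/`Phase="Running"` line with the elapsed time, ending when the pod reports `Phase="Succeeded"`.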
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":197,"skipped":3207,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:40:57.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Sep 29 11:40:57.301: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Sep 29 11:41:09.074: INFO: >>> kubeConfig: /root/.kube/config Sep 29 11:41:11.078: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:41:21.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6149" for this 
suite. • [SLOW TEST:24.612 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":198,"skipped":3207,"failed":0} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:41:21.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Sep 29 11:41:22.023: INFO: Pod name pod-release: Found 0 
pods out of 1 Sep 29 11:41:27.027: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:41:27.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8702" for this suite. • [SLOW TEST:5.318 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":199,"skipped":3208,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:41:27.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
STEP: Creating configMap configmap-5874/configmap-test-8e4491e8-ddc9-4159-9788-e340598d3cc0 STEP: Creating a pod to test consume configMaps Sep 29 11:41:27.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09" in namespace "configmap-5874" to be "Succeeded or Failed" Sep 29 11:41:27.305: INFO: Pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09": Phase="Pending", Reason="", readiness=false. Elapsed: 21.512859ms Sep 29 11:41:29.444: INFO: Pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160689717s Sep 29 11:41:31.448: INFO: Pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164942385s Sep 29 11:41:33.453: INFO: Pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169905973s STEP: Saw pod success Sep 29 11:41:33.454: INFO: Pod "pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09" satisfied condition "Succeeded or Failed" Sep 29 11:41:33.456: INFO: Trying to get logs from node kali-worker pod pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09 container env-test: STEP: delete the pod Sep 29 11:41:33.477: INFO: Waiting for pod pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09 to disappear Sep 29 11:41:33.534: INFO: Pod pod-configmaps-bb908670-786c-4126-9d2d-d9e355b45b09 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:41:33.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5874" for this suite. 
• [SLOW TEST:6.387 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3225,"failed":0} [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:41:33.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Sep 29 11:41:38.150: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5050 pod-service-account-6cb2aad6-8ebe-4061-a4e1-2900ca42829d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Sep 29 11:41:38.389: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5050 pod-service-account-6cb2aad6-8ebe-4061-a4e1-2900ca42829d -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Sep 29 11:41:38.604: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5050 pod-service-account-6cb2aad6-8ebe-4061-a4e1-2900ca42829d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:41:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5050" for this suite. • [SLOW TEST:5.260 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":201,"skipped":3225,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:41:38.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a 
container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Sep 29 11:41:38.930: INFO: Waiting up to 5m0s for pod "var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8" in namespace "var-expansion-6615" to be "Succeeded or Failed" Sep 29 11:41:38.933: INFO: Pod "var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261212ms Sep 29 11:41:40.965: INFO: Pod "var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035135115s Sep 29 11:41:42.969: INFO: Pod "var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038938162s STEP: Saw pod success Sep 29 11:41:42.969: INFO: Pod "var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8" satisfied condition "Succeeded or Failed" Sep 29 11:41:42.971: INFO: Trying to get logs from node kali-worker2 pod var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8 container dapi-container: STEP: delete the pod Sep 29 11:41:43.009: INFO: Waiting for pod var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8 to disappear Sep 29 11:41:43.023: INFO: Pod var-expansion-6c95befa-85b7-4e70-8e74-9fb1903760f8 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:41:43.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6615" for this suite. 
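The [sig-auth] ServiceAccounts test above verifies the auto-mounted API credentials by running `kubectl exec ... -- cat` against the three projected files (`token`, `ca.crt`, `namespace`) under `/var/run/secrets/kubernetes.io/serviceaccount/`. A sketch of the equivalent presence check; a temp directory with dummy contents stands in for the real in-pod mount so this runs outside a cluster:

```shell
# Stand-in for the in-pod serviceaccount mount (real path:
# /var/run/secrets/kubernetes.io/serviceaccount/); dummy contents only.
mount=$(mktemp -d)
printf 'dummy-token' > "$mount/token"
printf 'dummy-ca' > "$mount/ca.crt"
printf 'svcaccounts-5050' > "$mount/namespace"

# The test's core assertion: each projected file exists and is non-empty.
for f in token ca.crt namespace; do
  [ -s "$mount/$f" ] || { echo "missing $f"; exit 1; }
done
echo "all serviceaccount files present"
```

Inside a real pod the same loop (without the stub setup) would confirm the token volume was projected; the e2e test does the cluster-side version of this via `kubectl exec`.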
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3233,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:41:43.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9623 STEP: creating service affinity-nodeport in namespace services-9623 STEP: creating replication controller affinity-nodeport in namespace services-9623 I0929 11:41:43.154284 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9623, replica count: 3 I0929 11:41:46.204704 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:41:49.204966 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady Sep 29 11:41:49.214: INFO: Creating new exec pod Sep 29 11:41:54.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9623 execpod-affinityc9zsl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Sep 29 11:41:54.484: INFO: stderr: "I0929 11:41:54.410492 2240 log.go:181] (0xc0003078c0) (0xc000578be0) Create stream\nI0929 11:41:54.410540 2240 log.go:181] (0xc0003078c0) (0xc000578be0) Stream added, broadcasting: 1\nI0929 11:41:54.415151 2240 log.go:181] (0xc0003078c0) Reply frame received for 1\nI0929 11:41:54.415183 2240 log.go:181] (0xc0003078c0) (0xc000578000) Create stream\nI0929 11:41:54.415194 2240 log.go:181] (0xc0003078c0) (0xc000578000) Stream added, broadcasting: 3\nI0929 11:41:54.415993 2240 log.go:181] (0xc0003078c0) Reply frame received for 3\nI0929 11:41:54.416034 2240 log.go:181] (0xc0003078c0) (0xc000b260a0) Create stream\nI0929 11:41:54.416047 2240 log.go:181] (0xc0003078c0) (0xc000b260a0) Stream added, broadcasting: 5\nI0929 11:41:54.417036 2240 log.go:181] (0xc0003078c0) Reply frame received for 5\nI0929 11:41:54.475355 2240 log.go:181] (0xc0003078c0) Data frame received for 5\nI0929 11:41:54.475407 2240 log.go:181] (0xc000b260a0) (5) Data frame handling\nI0929 11:41:54.475448 2240 log.go:181] (0xc000b260a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0929 11:41:54.475980 2240 log.go:181] (0xc0003078c0) Data frame received for 5\nI0929 11:41:54.475995 2240 log.go:181] (0xc000b260a0) (5) Data frame handling\nI0929 11:41:54.476002 2240 log.go:181] (0xc000b260a0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0929 11:41:54.476212 2240 log.go:181] (0xc0003078c0) Data frame received for 3\nI0929 11:41:54.476256 2240 log.go:181] (0xc000578000) (3) Data frame handling\nI0929 11:41:54.476468 2240 log.go:181] (0xc0003078c0) Data frame received for 5\nI0929 11:41:54.476495 2240 log.go:181] 
(0xc000b260a0) (5) Data frame handling\nI0929 11:41:54.478448 2240 log.go:181] (0xc0003078c0) Data frame received for 1\nI0929 11:41:54.478479 2240 log.go:181] (0xc000578be0) (1) Data frame handling\nI0929 11:41:54.478496 2240 log.go:181] (0xc000578be0) (1) Data frame sent\nI0929 11:41:54.478520 2240 log.go:181] (0xc0003078c0) (0xc000578be0) Stream removed, broadcasting: 1\nI0929 11:41:54.478551 2240 log.go:181] (0xc0003078c0) Go away received\nI0929 11:41:54.478981 2240 log.go:181] (0xc0003078c0) (0xc000578be0) Stream removed, broadcasting: 1\nI0929 11:41:54.478997 2240 log.go:181] (0xc0003078c0) (0xc000578000) Stream removed, broadcasting: 3\nI0929 11:41:54.479007 2240 log.go:181] (0xc0003078c0) (0xc000b260a0) Stream removed, broadcasting: 5\n" Sep 29 11:41:54.484: INFO: stdout: "" Sep 29 11:41:54.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9623 execpod-affinityc9zsl -- /bin/sh -x -c nc -zv -t -w 2 10.99.230.193 80' Sep 29 11:41:54.704: INFO: stderr: "I0929 11:41:54.633822 2258 log.go:181] (0xc0009af4a0) (0xc0006d2aa0) Create stream\nI0929 11:41:54.633876 2258 log.go:181] (0xc0009af4a0) (0xc0006d2aa0) Stream added, broadcasting: 1\nI0929 11:41:54.640076 2258 log.go:181] (0xc0009af4a0) Reply frame received for 1\nI0929 11:41:54.640123 2258 log.go:181] (0xc0009af4a0) (0xc0006d2000) Create stream\nI0929 11:41:54.640141 2258 log.go:181] (0xc0009af4a0) (0xc0006d2000) Stream added, broadcasting: 3\nI0929 11:41:54.641626 2258 log.go:181] (0xc0009af4a0) Reply frame received for 3\nI0929 11:41:54.641703 2258 log.go:181] (0xc0009af4a0) (0xc000309400) Create stream\nI0929 11:41:54.641722 2258 log.go:181] (0xc0009af4a0) (0xc000309400) Stream added, broadcasting: 5\nI0929 11:41:54.642772 2258 log.go:181] (0xc0009af4a0) Reply frame received for 5\nI0929 11:41:54.697581 2258 log.go:181] (0xc0009af4a0) Data frame received for 5\nI0929 11:41:54.697611 2258 log.go:181] (0xc000309400) (5) 
Data frame handling\nI0929 11:41:54.697620 2258 log.go:181] (0xc000309400) (5) Data frame sent\nI0929 11:41:54.697626 2258 log.go:181] (0xc0009af4a0) Data frame received for 5\nI0929 11:41:54.697633 2258 log.go:181] (0xc000309400) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.230.193 80\nConnection to 10.99.230.193 80 port [tcp/http] succeeded!\nI0929 11:41:54.697657 2258 log.go:181] (0xc0009af4a0) Data frame received for 3\nI0929 11:41:54.697664 2258 log.go:181] (0xc0006d2000) (3) Data frame handling\nI0929 11:41:54.698904 2258 log.go:181] (0xc0009af4a0) Data frame received for 1\nI0929 11:41:54.698945 2258 log.go:181] (0xc0006d2aa0) (1) Data frame handling\nI0929 11:41:54.698972 2258 log.go:181] (0xc0006d2aa0) (1) Data frame sent\nI0929 11:41:54.698992 2258 log.go:181] (0xc0009af4a0) (0xc0006d2aa0) Stream removed, broadcasting: 1\nI0929 11:41:54.699012 2258 log.go:181] (0xc0009af4a0) Go away received\nI0929 11:41:54.699401 2258 log.go:181] (0xc0009af4a0) (0xc0006d2aa0) Stream removed, broadcasting: 1\nI0929 11:41:54.699428 2258 log.go:181] (0xc0009af4a0) (0xc0006d2000) Stream removed, broadcasting: 3\nI0929 11:41:54.699438 2258 log.go:181] (0xc0009af4a0) (0xc000309400) Stream removed, broadcasting: 5\n" Sep 29 11:41:54.704: INFO: stdout: "" Sep 29 11:41:54.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9623 execpod-affinityc9zsl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30221' Sep 29 11:41:54.929: INFO: stderr: "I0929 11:41:54.842166 2276 log.go:181] (0xc000c274a0) (0xc000742aa0) Create stream\nI0929 11:41:54.842248 2276 log.go:181] (0xc000c274a0) (0xc000742aa0) Stream added, broadcasting: 1\nI0929 11:41:54.846869 2276 log.go:181] (0xc000c274a0) Reply frame received for 1\nI0929 11:41:54.846903 2276 log.go:181] (0xc000c274a0) (0xc000822960) Create stream\nI0929 11:41:54.846911 2276 log.go:181] (0xc000c274a0) (0xc000822960) Stream added, broadcasting: 3\nI0929 
11:41:54.847712 2276 log.go:181] (0xc000c274a0) Reply frame received for 3\nI0929 11:41:54.847748 2276 log.go:181] (0xc000c274a0) (0xc000bba000) Create stream\nI0929 11:41:54.847755 2276 log.go:181] (0xc000c274a0) (0xc000bba000) Stream added, broadcasting: 5\nI0929 11:41:54.848500 2276 log.go:181] (0xc000c274a0) Reply frame received for 5\nI0929 11:41:54.921531 2276 log.go:181] (0xc000c274a0) Data frame received for 5\nI0929 11:41:54.921575 2276 log.go:181] (0xc000bba000) (5) Data frame handling\nI0929 11:41:54.921596 2276 log.go:181] (0xc000bba000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30221\nConnection to 172.18.0.12 30221 port [tcp/30221] succeeded!\nI0929 11:41:54.922043 2276 log.go:181] (0xc000c274a0) Data frame received for 5\nI0929 11:41:54.922078 2276 log.go:181] (0xc000c274a0) Data frame received for 3\nI0929 11:41:54.922107 2276 log.go:181] (0xc000822960) (3) Data frame handling\nI0929 11:41:54.922131 2276 log.go:181] (0xc000bba000) (5) Data frame handling\nI0929 11:41:54.923552 2276 log.go:181] (0xc000c274a0) Data frame received for 1\nI0929 11:41:54.923584 2276 log.go:181] (0xc000742aa0) (1) Data frame handling\nI0929 11:41:54.923609 2276 log.go:181] (0xc000742aa0) (1) Data frame sent\nI0929 11:41:54.923736 2276 log.go:181] (0xc000c274a0) (0xc000742aa0) Stream removed, broadcasting: 1\nI0929 11:41:54.923774 2276 log.go:181] (0xc000c274a0) Go away received\nI0929 11:41:54.924284 2276 log.go:181] (0xc000c274a0) (0xc000742aa0) Stream removed, broadcasting: 1\nI0929 11:41:54.924326 2276 log.go:181] (0xc000c274a0) (0xc000822960) Stream removed, broadcasting: 3\nI0929 11:41:54.924341 2276 log.go:181] (0xc000c274a0) (0xc000bba000) Stream removed, broadcasting: 5\n" Sep 29 11:41:54.929: INFO: stdout: "" Sep 29 11:41:54.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9623 execpod-affinityc9zsl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30221' Sep 29 
11:41:55.163: INFO: stderr: "I0929 11:41:55.086527 2294 log.go:181] (0xc000230000) (0xc0000cde00) Create stream\nI0929 11:41:55.086602 2294 log.go:181] (0xc000230000) (0xc0000cde00) Stream added, broadcasting: 1\nI0929 11:41:55.091040 2294 log.go:181] (0xc000230000) Reply frame received for 1\nI0929 11:41:55.091105 2294 log.go:181] (0xc000230000) (0xc0008605a0) Create stream\nI0929 11:41:55.091133 2294 log.go:181] (0xc000230000) (0xc0008605a0) Stream added, broadcasting: 3\nI0929 11:41:55.092751 2294 log.go:181] (0xc000230000) Reply frame received for 3\nI0929 11:41:55.092800 2294 log.go:181] (0xc000230000) (0xc000820320) Create stream\nI0929 11:41:55.092814 2294 log.go:181] (0xc000230000) (0xc000820320) Stream added, broadcasting: 5\nI0929 11:41:55.093843 2294 log.go:181] (0xc000230000) Reply frame received for 5\nI0929 11:41:55.154997 2294 log.go:181] (0xc000230000) Data frame received for 5\nI0929 11:41:55.155045 2294 log.go:181] (0xc000820320) (5) Data frame handling\nI0929 11:41:55.155076 2294 log.go:181] (0xc000820320) (5) Data frame sent\nI0929 11:41:55.155092 2294 log.go:181] (0xc000230000) Data frame received for 5\nI0929 11:41:55.155104 2294 log.go:181] (0xc000820320) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30221\nConnection to 172.18.0.13 30221 port [tcp/30221] succeeded!\nI0929 11:41:55.155155 2294 log.go:181] (0xc000820320) (5) Data frame sent\nI0929 11:41:55.155481 2294 log.go:181] (0xc000230000) Data frame received for 3\nI0929 11:41:55.155511 2294 log.go:181] (0xc0008605a0) (3) Data frame handling\nI0929 11:41:55.155781 2294 log.go:181] (0xc000230000) Data frame received for 5\nI0929 11:41:55.155802 2294 log.go:181] (0xc000820320) (5) Data frame handling\nI0929 11:41:55.157702 2294 log.go:181] (0xc000230000) Data frame received for 1\nI0929 11:41:55.157741 2294 log.go:181] (0xc0000cde00) (1) Data frame handling\nI0929 11:41:55.157764 2294 log.go:181] (0xc0000cde00) (1) Data frame sent\nI0929 11:41:55.157789 2294 log.go:181] 
(0xc000230000) (0xc0000cde00) Stream removed, broadcasting: 1\nI0929 11:41:55.157825 2294 log.go:181] (0xc000230000) Go away received\nI0929 11:41:55.158332 2294 log.go:181] (0xc000230000) (0xc0000cde00) Stream removed, broadcasting: 1\nI0929 11:41:55.158378 2294 log.go:181] (0xc000230000) (0xc0008605a0) Stream removed, broadcasting: 3\nI0929 11:41:55.158410 2294 log.go:181] (0xc000230000) (0xc000820320) Stream removed, broadcasting: 5\n" Sep 29 11:41:55.163: INFO: stdout: "" Sep 29 11:41:55.163: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9623 execpod-affinityc9zsl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:30221/ ; done' Sep 29 11:41:55.501: INFO: stderr: "I0929 11:41:55.307788 2312 log.go:181] (0xc0006e93f0) (0xc00081e8c0) Create stream\nI0929 11:41:55.307852 2312 log.go:181] (0xc0006e93f0) (0xc00081e8c0) Stream added, broadcasting: 1\nI0929 11:41:55.310760 2312 log.go:181] (0xc0006e93f0) Reply frame received for 1\nI0929 11:41:55.310805 2312 log.go:181] (0xc0006e93f0) (0xc0007a6320) Create stream\nI0929 11:41:55.310828 2312 log.go:181] (0xc0006e93f0) (0xc0007a6320) Stream added, broadcasting: 3\nI0929 11:41:55.311945 2312 log.go:181] (0xc0006e93f0) Reply frame received for 3\nI0929 11:41:55.311988 2312 log.go:181] (0xc0006e93f0) (0xc0006e0460) Create stream\nI0929 11:41:55.312012 2312 log.go:181] (0xc0006e93f0) (0xc0006e0460) Stream added, broadcasting: 5\nI0929 11:41:55.313085 2312 log.go:181] (0xc0006e93f0) Reply frame received for 5\nI0929 11:41:55.387244 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.387289 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.387305 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.387335 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.387346 2312 log.go:181] (0xc0006e0460) (5) Data frame 
handling\nI0929 11:41:55.387358 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.394691 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.394724 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.394755 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.395619 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.395646 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.395659 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.395678 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.395688 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.395700 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.402741 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.402765 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.402786 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.403351 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.403371 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.403390 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.405350 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.405378 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.405398 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.406958 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.406975 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.406992 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.407652 2312 log.go:181] (0xc0006e93f0) 
Data frame received for 5\nI0929 11:41:55.407701 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.407725 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.407751 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.407767 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.407783 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.414851 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.414870 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.414880 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.415602 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.415630 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.415641 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.415665 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.415681 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.415691 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.421263 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.421282 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.421297 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.421704 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.421717 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.421723 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.421813 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.421837 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.421858 2312 log.go:181] 
(0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.429105 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.429118 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.429124 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.430081 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.430122 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.430137 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.430151 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.430158 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.430165 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.434455 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.434493 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.434529 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.434949 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.434965 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.434980 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.435008 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.435025 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.435042 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.442307 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.442332 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.442349 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.442755 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.442771 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.442778 
2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.442828 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.442841 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.442852 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.449583 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.449604 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.449619 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.450119 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.450153 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.450165 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.450203 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.450216 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.450234 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.453935 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.453956 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.454020 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.454199 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.454222 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.454230 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.454235 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.454240 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.454264 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.454272 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 
11:41:55.454277 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.454283 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.460986 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.461006 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.461031 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.461861 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.461901 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.461916 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.461933 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.461943 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.461954 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.466987 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.467024 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.467047 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.467756 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.467786 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.467798 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.467808 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.467817 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.467839 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.467848 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.467857 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.467878 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.475463 2312 log.go:181] (0xc0006e93f0) Data frame 
received for 3\nI0929 11:41:55.475495 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.475519 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.476330 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.476369 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.476385 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.476397 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.476407 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.476436 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\nI0929 11:41:55.476457 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.476486 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.476507 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.481629 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.481647 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.481658 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.482208 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.482223 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.482233 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.482318 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.482337 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.482354 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.488697 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.488715 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.488724 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.489373 2312 log.go:181] 
(0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.489391 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.489402 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.489417 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.489427 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.489439 2312 log.go:181] (0xc0006e0460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:30221/\nI0929 11:41:55.493496 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.493530 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.493555 2312 log.go:181] (0xc0007a6320) (3) Data frame sent\nI0929 11:41:55.494185 2312 log.go:181] (0xc0006e93f0) Data frame received for 5\nI0929 11:41:55.494211 2312 log.go:181] (0xc0006e0460) (5) Data frame handling\nI0929 11:41:55.494333 2312 log.go:181] (0xc0006e93f0) Data frame received for 3\nI0929 11:41:55.494365 2312 log.go:181] (0xc0007a6320) (3) Data frame handling\nI0929 11:41:55.496447 2312 log.go:181] (0xc0006e93f0) Data frame received for 1\nI0929 11:41:55.496482 2312 log.go:181] (0xc00081e8c0) (1) Data frame handling\nI0929 11:41:55.496527 2312 log.go:181] (0xc00081e8c0) (1) Data frame sent\nI0929 11:41:55.496561 2312 log.go:181] (0xc0006e93f0) (0xc00081e8c0) Stream removed, broadcasting: 1\nI0929 11:41:55.496593 2312 log.go:181] (0xc0006e93f0) Go away received\nI0929 11:41:55.497081 2312 log.go:181] (0xc0006e93f0) (0xc00081e8c0) Stream removed, broadcasting: 1\nI0929 11:41:55.497107 2312 log.go:181] (0xc0006e93f0) (0xc0007a6320) Stream removed, broadcasting: 3\nI0929 11:41:55.497118 2312 log.go:181] (0xc0006e93f0) (0xc0006e0460) Stream removed, broadcasting: 5\n" Sep 29 11:41:55.502: INFO: stdout: 
"\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr\naffinity-nodeport-pc7mr" Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Received response from host: affinity-nodeport-pc7mr Sep 29 11:41:55.502: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9623, will wait for the garbage collector to delete the pods Sep 29 11:41:55.602: INFO: Deleting ReplicationController affinity-nodeport took: 8.359089ms Sep 29 11:41:56.102: INFO: 
Terminating ReplicationController affinity-nodeport pods took: 500.238625ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:01.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9623" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:18.021 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":203,"skipped":3249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:01.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt 
matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Sep 29 11:42:06.235: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:06.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6423" for this suite. • [SLOW TEST:5.391 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":204,"skipped":3298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:06.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5661d545-3c5e-4163-8c9e-8508786f8f9c STEP: Creating a pod to test consume configMaps Sep 29 11:42:06.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b" in namespace "projected-5769" to be "Succeeded or Failed" Sep 29 11:42:06.679: INFO: Pod "pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.626784ms Sep 29 11:42:08.768: INFO: Pod "pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140664256s Sep 29 11:42:10.772: INFO: Pod "pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.144925796s STEP: Saw pod success Sep 29 11:42:10.772: INFO: Pod "pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b" satisfied condition "Succeeded or Failed" Sep 29 11:42:10.775: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b container projected-configmap-volume-test: STEP: delete the pod Sep 29 11:42:10.902: INFO: Waiting for pod pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b to disappear Sep 29 11:42:10.933: INFO: Pod pod-projected-configmaps-ee3ea1e6-bd7c-42a9-a647-472b15e8201b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:10.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5769" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3331,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:10.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:42:11.034: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:15.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9708" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3351,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:15.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-upd-01985dd9-31cc-4bac-9d72-4a9916c21a65 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:21.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-474" for this suite. • [SLOW TEST:6.144 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3354,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:21.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-68686f78-6507-4585-adb6-6bdac58c3c5f STEP: Creating a pod to test consume configMaps Sep 29 11:42:21.402: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d" in namespace "projected-7310" to be "Succeeded or Failed" Sep 29 11:42:21.411: INFO: Pod "pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.844575ms Sep 29 11:42:23.416: INFO: Pod "pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014444984s Sep 29 11:42:25.433: INFO: Pod "pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031608089s STEP: Saw pod success Sep 29 11:42:25.433: INFO: Pod "pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d" satisfied condition "Succeeded or Failed" Sep 29 11:42:25.437: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d container projected-configmap-volume-test: STEP: delete the pod Sep 29 11:42:25.500: INFO: Waiting for pod pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d to disappear Sep 29 11:42:25.531: INFO: Pod pod-projected-configmaps-950ca633-b8e3-4e7e-ad70-449d7e08640d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7310" for this suite. 
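The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` lines above come from the e2e framework's poll-until-condition loop. A minimal sketch of that pattern (function and variable names are illustrative, not the actual framework code, which is Go):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll check() until it returns True or the timeout elapses.

    Mirrors the e2e wait loop that logs a Phase="Pending"/"Succeeded"
    line with the elapsed time on every poll.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.3f}s")
        time.sleep(interval)

# Simulated pod that reaches "Succeeded" on the third poll, like the
# Pending -> Pending -> Succeeded transitions seen in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = wait_for_condition(lambda: next(phases) == "Succeeded",
                             timeout=10.0, interval=0.01)
```

The real framework also distinguishes a terminal `Failed` phase (hence the "Succeeded or Failed" condition) so a crashed pod fails fast instead of burning the full timeout.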
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3360,"failed":0} ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:25.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4755 STEP: creating replication controller nodeport-test in namespace services-4755 I0929 11:42:25.804120 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4755, replica count: 2 I0929 11:42:28.854565 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:42:31.854785 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:42:31.854: INFO: Creating new exec pod Sep 29 11:42:36.892: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4755 execpodpfll4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Sep 29 11:42:37.137: INFO: stderr: "I0929 11:42:37.037704 2330 log.go:181] (0xc0001f74a0) (0xc0001ee820) Create stream\nI0929 11:42:37.037768 2330 log.go:181] (0xc0001f74a0) (0xc0001ee820) Stream added, broadcasting: 1\nI0929 11:42:37.043313 2330 log.go:181] (0xc0001f74a0) Reply frame received for 1\nI0929 11:42:37.043387 2330 log.go:181] (0xc0001f74a0) (0xc000d02000) Create stream\nI0929 11:42:37.043413 2330 log.go:181] (0xc0001f74a0) (0xc000d02000) Stream added, broadcasting: 3\nI0929 11:42:37.044356 2330 log.go:181] (0xc0001f74a0) Reply frame received for 3\nI0929 11:42:37.044392 2330 log.go:181] (0xc0001f74a0) (0xc0001ee000) Create stream\nI0929 11:42:37.044401 2330 log.go:181] (0xc0001f74a0) (0xc0001ee000) Stream added, broadcasting: 5\nI0929 11:42:37.045586 2330 log.go:181] (0xc0001f74a0) Reply frame received for 5\nI0929 11:42:37.129652 2330 log.go:181] (0xc0001f74a0) Data frame received for 5\nI0929 11:42:37.129688 2330 log.go:181] (0xc0001ee000) (5) Data frame handling\nI0929 11:42:37.129721 2330 log.go:181] (0xc0001ee000) (5) Data frame sent\nI0929 11:42:37.129741 2330 log.go:181] (0xc0001f74a0) Data frame received for 5\nI0929 11:42:37.129759 2330 log.go:181] (0xc0001ee000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0929 11:42:37.129782 2330 log.go:181] (0xc0001f74a0) Data frame received for 3\nI0929 11:42:37.129805 2330 log.go:181] (0xc000d02000) (3) Data frame handling\nI0929 11:42:37.129833 2330 log.go:181] (0xc0001ee000) (5) Data frame sent\nI0929 11:42:37.130191 2330 log.go:181] (0xc0001f74a0) Data frame received for 5\nI0929 11:42:37.130218 2330 log.go:181] (0xc0001ee000) (5) Data frame handling\nI0929 11:42:37.132132 2330 log.go:181] (0xc0001f74a0) Data frame received for 1\nI0929 
11:42:37.132166 2330 log.go:181] (0xc0001ee820) (1) Data frame handling\nI0929 11:42:37.132184 2330 log.go:181] (0xc0001ee820) (1) Data frame sent\nI0929 11:42:37.132199 2330 log.go:181] (0xc0001f74a0) (0xc0001ee820) Stream removed, broadcasting: 1\nI0929 11:42:37.132216 2330 log.go:181] (0xc0001f74a0) Go away received\nI0929 11:42:37.132715 2330 log.go:181] (0xc0001f74a0) (0xc0001ee820) Stream removed, broadcasting: 1\nI0929 11:42:37.132737 2330 log.go:181] (0xc0001f74a0) (0xc000d02000) Stream removed, broadcasting: 3\nI0929 11:42:37.132749 2330 log.go:181] (0xc0001f74a0) (0xc0001ee000) Stream removed, broadcasting: 5\n" Sep 29 11:42:37.137: INFO: stdout: "" Sep 29 11:42:37.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4755 execpodpfll4 -- /bin/sh -x -c nc -zv -t -w 2 10.103.239.107 80' Sep 29 11:42:37.338: INFO: stderr: "I0929 11:42:37.261449 2348 log.go:181] (0xc000928e70) (0xc00029e140) Create stream\nI0929 11:42:37.261494 2348 log.go:181] (0xc000928e70) (0xc00029e140) Stream added, broadcasting: 1\nI0929 11:42:37.266261 2348 log.go:181] (0xc000928e70) Reply frame received for 1\nI0929 11:42:37.266308 2348 log.go:181] (0xc000928e70) (0xc000892140) Create stream\nI0929 11:42:37.266323 2348 log.go:181] (0xc000928e70) (0xc000892140) Stream added, broadcasting: 3\nI0929 11:42:37.267180 2348 log.go:181] (0xc000928e70) Reply frame received for 3\nI0929 11:42:37.267245 2348 log.go:181] (0xc000928e70) (0xc000c400a0) Create stream\nI0929 11:42:37.267271 2348 log.go:181] (0xc000928e70) (0xc000c400a0) Stream added, broadcasting: 5\nI0929 11:42:37.268192 2348 log.go:181] (0xc000928e70) Reply frame received for 5\nI0929 11:42:37.331849 2348 log.go:181] (0xc000928e70) Data frame received for 5\nI0929 11:42:37.331892 2348 log.go:181] (0xc000c400a0) (5) Data frame handling\nI0929 11:42:37.331910 2348 log.go:181] (0xc000c400a0) (5) Data frame sent\nI0929 11:42:37.331921 2348 log.go:181] 
(0xc000928e70) Data frame received for 5\nI0929 11:42:37.331931 2348 log.go:181] (0xc000c400a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.239.107 80\nConnection to 10.103.239.107 80 port [tcp/http] succeeded!\nI0929 11:42:37.331957 2348 log.go:181] (0xc000928e70) Data frame received for 3\nI0929 11:42:37.331966 2348 log.go:181] (0xc000892140) (3) Data frame handling\nI0929 11:42:37.333592 2348 log.go:181] (0xc000928e70) Data frame received for 1\nI0929 11:42:37.333637 2348 log.go:181] (0xc00029e140) (1) Data frame handling\nI0929 11:42:37.333659 2348 log.go:181] (0xc00029e140) (1) Data frame sent\nI0929 11:42:37.333684 2348 log.go:181] (0xc000928e70) (0xc00029e140) Stream removed, broadcasting: 1\nI0929 11:42:37.333721 2348 log.go:181] (0xc000928e70) Go away received\nI0929 11:42:37.334128 2348 log.go:181] (0xc000928e70) (0xc00029e140) Stream removed, broadcasting: 1\nI0929 11:42:37.334165 2348 log.go:181] (0xc000928e70) (0xc000892140) Stream removed, broadcasting: 3\nI0929 11:42:37.334179 2348 log.go:181] (0xc000928e70) (0xc000c400a0) Stream removed, broadcasting: 5\n" Sep 29 11:42:37.338: INFO: stdout: "" Sep 29 11:42:37.339: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4755 execpodpfll4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31148' Sep 29 11:42:37.546: INFO: stderr: "I0929 11:42:37.474801 2367 log.go:181] (0xc00003a4d0) (0xc0001a2280) Create stream\nI0929 11:42:37.474863 2367 log.go:181] (0xc00003a4d0) (0xc0001a2280) Stream added, broadcasting: 1\nI0929 11:42:37.479209 2367 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI0929 11:42:37.479261 2367 log.go:181] (0xc00003a4d0) (0xc0001a3360) Create stream\nI0929 11:42:37.479275 2367 log.go:181] (0xc00003a4d0) (0xc0001a3360) Stream added, broadcasting: 3\nI0929 11:42:37.480505 2367 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI0929 11:42:37.480554 2367 log.go:181] (0xc00003a4d0) (0xc000ae1860) Create 
stream\nI0929 11:42:37.480568 2367 log.go:181] (0xc00003a4d0) (0xc000ae1860) Stream added, broadcasting: 5\nI0929 11:42:37.481826 2367 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI0929 11:42:37.540364 2367 log.go:181] (0xc00003a4d0) Data frame received for 3\nI0929 11:42:37.540413 2367 log.go:181] (0xc0001a3360) (3) Data frame handling\nI0929 11:42:37.540451 2367 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0929 11:42:37.540465 2367 log.go:181] (0xc000ae1860) (5) Data frame handling\nI0929 11:42:37.540477 2367 log.go:181] (0xc000ae1860) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31148\nConnection to 172.18.0.12 31148 port [tcp/31148] succeeded!\nI0929 11:42:37.540953 2367 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0929 11:42:37.540988 2367 log.go:181] (0xc000ae1860) (5) Data frame handling\nI0929 11:42:37.542466 2367 log.go:181] (0xc00003a4d0) Data frame received for 1\nI0929 11:42:37.542493 2367 log.go:181] (0xc0001a2280) (1) Data frame handling\nI0929 11:42:37.542511 2367 log.go:181] (0xc0001a2280) (1) Data frame sent\nI0929 11:42:37.542527 2367 log.go:181] (0xc00003a4d0) (0xc0001a2280) Stream removed, broadcasting: 1\nI0929 11:42:37.542557 2367 log.go:181] (0xc00003a4d0) Go away received\nI0929 11:42:37.542862 2367 log.go:181] (0xc00003a4d0) (0xc0001a2280) Stream removed, broadcasting: 1\nI0929 11:42:37.542875 2367 log.go:181] (0xc00003a4d0) (0xc0001a3360) Stream removed, broadcasting: 3\nI0929 11:42:37.542880 2367 log.go:181] (0xc00003a4d0) (0xc000ae1860) Stream removed, broadcasting: 5\n" Sep 29 11:42:37.546: INFO: stdout: "" Sep 29 11:42:37.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4755 execpodpfll4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31148' Sep 29 11:42:37.743: INFO: stderr: "I0929 11:42:37.668471 2386 log.go:181] (0xc000958e70) (0xc000e165a0) Create stream\nI0929 11:42:37.668522 2386 log.go:181] (0xc000958e70) 
(0xc000e165a0) Stream added, broadcasting: 1\nI0929 11:42:37.672588 2386 log.go:181] (0xc000958e70) Reply frame received for 1\nI0929 11:42:37.672637 2386 log.go:181] (0xc000958e70) (0xc0008fca00) Create stream\nI0929 11:42:37.672658 2386 log.go:181] (0xc000958e70) (0xc0008fca00) Stream added, broadcasting: 3\nI0929 11:42:37.673658 2386 log.go:181] (0xc000958e70) Reply frame received for 3\nI0929 11:42:37.673692 2386 log.go:181] (0xc000958e70) (0xc000c3e000) Create stream\nI0929 11:42:37.673702 2386 log.go:181] (0xc000958e70) (0xc000c3e000) Stream added, broadcasting: 5\nI0929 11:42:37.674461 2386 log.go:181] (0xc000958e70) Reply frame received for 5\nI0929 11:42:37.735744 2386 log.go:181] (0xc000958e70) Data frame received for 5\nI0929 11:42:37.735773 2386 log.go:181] (0xc000c3e000) (5) Data frame handling\nI0929 11:42:37.735793 2386 log.go:181] (0xc000c3e000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31148\nConnection to 172.18.0.13 31148 port [tcp/31148] succeeded!\nI0929 11:42:37.736006 2386 log.go:181] (0xc000958e70) Data frame received for 5\nI0929 11:42:37.736047 2386 log.go:181] (0xc000c3e000) (5) Data frame handling\nI0929 11:42:37.736223 2386 log.go:181] (0xc000958e70) Data frame received for 3\nI0929 11:42:37.736254 2386 log.go:181] (0xc0008fca00) (3) Data frame handling\nI0929 11:42:37.737852 2386 log.go:181] (0xc000958e70) Data frame received for 1\nI0929 11:42:37.737889 2386 log.go:181] (0xc000e165a0) (1) Data frame handling\nI0929 11:42:37.737910 2386 log.go:181] (0xc000e165a0) (1) Data frame sent\nI0929 11:42:37.737931 2386 log.go:181] (0xc000958e70) (0xc000e165a0) Stream removed, broadcasting: 1\nI0929 11:42:37.737953 2386 log.go:181] (0xc000958e70) Go away received\nI0929 11:42:37.738415 2386 log.go:181] (0xc000958e70) (0xc000e165a0) Stream removed, broadcasting: 1\nI0929 11:42:37.738443 2386 log.go:181] (0xc000958e70) (0xc0008fca00) Stream removed, broadcasting: 3\nI0929 11:42:37.738456 2386 log.go:181] (0xc000958e70) (0xc000c3e000) 
Stream removed, broadcasting: 5\n" Sep 29 11:42:37.743: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:37.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4755" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.165 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":209,"skipped":3360,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:37.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 29 11:42:37.854: INFO: Waiting up to 5m0s for pod "pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2" in namespace "emptydir-4055" to be "Succeeded or Failed" Sep 29 11:42:37.918: INFO: Pod "pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 63.75309ms Sep 29 11:42:39.922: INFO: Pod "pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067753585s Sep 29 11:42:41.925: INFO: Pod "pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071124834s STEP: Saw pod success Sep 29 11:42:41.925: INFO: Pod "pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2" satisfied condition "Succeeded or Failed" Sep 29 11:42:41.927: INFO: Trying to get logs from node kali-worker pod pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2 container test-container: STEP: delete the pod Sep 29 11:42:41.942: INFO: Waiting for pod pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2 to disappear Sep 29 11:42:41.947: INFO: Pod pod-07ee9a02-9549-4ec3-9766-192bf3fb72b2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:41.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4055" for this suite. 
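The emptyDir test above mounts a tmpfs volume and has the container verify that a file created as a non-root user carries exactly mode 0644. The permission check it performs can be sketched like this (using an ordinary temp directory in place of the tmpfs mount; file name and content are illustrative):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test-file")
    # Create the file the way the mount-test container would on the
    # emptyDir mount, then apply the 0644 mode under test.
    with open(path, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(path, 0o644)
    # S_IMODE strips the file-type bits, leaving only the permission
    # bits that the e2e assertion compares against.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    mode_str = stat.filemode(os.stat(path).st_mode)
```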
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3377,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:41.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:42:42.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58" in namespace "projected-5586" to be "Succeeded or Failed" Sep 29 11:42:42.083: INFO: Pod "downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902358ms Sep 29 11:42:44.087: INFO: Pod "downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006965845s Sep 29 11:42:46.091: INFO: Pod "downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010919477s STEP: Saw pod success Sep 29 11:42:46.091: INFO: Pod "downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58" satisfied condition "Succeeded or Failed" Sep 29 11:42:46.093: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58 container client-container: STEP: delete the pod Sep 29 11:42:46.218: INFO: Waiting for pod downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58 to disappear Sep 29 11:42:46.227: INFO: Pod downwardapi-volume-94ba04a6-1f22-4615-a05b-4df7a932ae58 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:42:46.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5586" for this suite. 
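The projected downwardAPI test above sets an explicit per-item mode on a volume item and verifies that the mounted file carries it. The relevant volume stanza looks roughly like the following, shown here as a plain dict (field names follow the Kubernetes API for projected volumes; the volume name, item path, and the 0400 mode are assumptions about this particular test, not taken from the log):

```python
# Projected downwardAPI volume with an explicit per-item mode, as
# exercised by the "should set mode on item file" conformance test.
volume = {
    "name": "podinfo",
    "projected": {
        "sources": [
            {
                "downwardAPI": {
                    "items": [
                        {
                            "path": "podname",
                            "fieldRef": {"fieldPath": "metadata.name"},
                            # the test then reads the mounted file's
                            # permission bits and expects this value
                            "mode": 0o400,
                        }
                    ]
                }
            }
        ]
    },
}

item = volume["projected"]["sources"][0]["downwardAPI"]["items"][0]
```

An item-level `mode` overrides the volume's `defaultMode` for that one file only, which is why the test inspects the single item file rather than the whole mount.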
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3379,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:42:46.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-dfca83f1-4410-4b3a-a35d-83186d3ee6d8 in namespace container-probe-8467 Sep 29 11:42:50.365: INFO: Started pod busybox-dfca83f1-4410-4b3a-a35d-83186d3ee6d8 in namespace container-probe-8467 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 11:42:50.367: INFO: Initial restart count of pod busybox-dfca83f1-4410-4b3a-a35d-83186d3ee6d8 is 0 Sep 29 11:43:42.521: INFO: Restart count of pod container-probe-8467/busybox-dfca83f1-4410-4b3a-a35d-83186d3ee6d8 is now 1 (52.153312389s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:43:42.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8467" for this suite. • [SLOW TEST:56.379 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:43:42.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in 
volume subpath Sep 29 11:43:42.686: INFO: Waiting up to 5m0s for pod "var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac" in namespace "var-expansion-4276" to be "Succeeded or Failed" Sep 29 11:43:42.743: INFO: Pod "var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 57.132587ms Sep 29 11:43:44.749: INFO: Pod "var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062656841s Sep 29 11:43:46.753: INFO: Pod "var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067247121s STEP: Saw pod success Sep 29 11:43:46.753: INFO: Pod "var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac" satisfied condition "Succeeded or Failed" Sep 29 11:43:46.756: INFO: Trying to get logs from node kali-worker2 pod var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac container dapi-container: STEP: delete the pod Sep 29 11:43:46.774: INFO: Waiting for pod var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac to disappear Sep 29 11:43:46.797: INFO: Pod var-expansion-095b39cc-9406-44cd-aa19-7ca30e27c1ac no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:43:46.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4276" for this suite. 
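The variable-expansion test above substitutes `$(VAR)` references into a volume subPath. The substitution semantics can be sketched as follows; this is a simplified stand-in for Kubernetes' expansion rules (known `$(VAR)` references are replaced, unknown ones are left untouched, and `$$` escapes a literal `$`), not the actual Go implementation:

```python
import re

def expand(s, mapping):
    """Expand $(VAR) references, Kubernetes-style (simplified sketch)."""
    out = []
    i = 0
    while i < len(s):
        c = s[i]
        if c == "$" and i + 1 < len(s):
            if s[i + 1] == "$":  # '$$' escapes to a literal '$'
                out.append("$")
                i += 2
                continue
            m = re.match(r"\((\w+)\)", s[i + 1:])
            if m and m.group(1) in mapping:
                out.append(mapping[m.group(1)])
                i += 2 + len(m.group(1)) + 1  # skip past '$(NAME)'
                continue
        out.append(c)
        i += 1
    return "".join(out)

# A subPath like the one this test exercises (pod name is illustrative).
expanded = expand("/logs/$(POD_NAME)", {"POD_NAME": "var-expansion-pod"})
```

Leaving unresolved references intact (rather than erroring or blanking them) is what lets the test distinguish a successful substitution from a silently dropped variable.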
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":213,"skipped":3417,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:43:46.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:43:47.415: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"769c9c13-8623-4648-83f4-1f0698f509b5", Controller:(*bool)(0xc003d013f2), BlockOwnerDeletion:(*bool)(0xc003d013f3)}} Sep 29 11:43:47.432: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c3dca631-2b28-4a5c-b6a6-9a2ae92dba8a", Controller:(*bool)(0xc0037946da), BlockOwnerDeletion:(*bool)(0xc0037946db)}} Sep 29 11:43:47.489: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bbaf3d9f-9217-43db-ad9a-50ac38acf4eb", Controller:(*bool)(0xc003d015fa), BlockOwnerDeletion:(*bool)(0xc003d015fb)}} [AfterEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:43:52.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1592" for this suite. • [SLOW TEST:5.781 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":214,"skipped":3430,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:43:52.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-508b276f-64ac-4ed0-88c1-d3b41fcd36e8 STEP: Creating a pod to test consume 
configMaps Sep 29 11:43:52.692: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0" in namespace "configmap-4436" to be "Succeeded or Failed" Sep 29 11:43:52.713: INFO: Pod "pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.989829ms Sep 29 11:43:54.719: INFO: Pod "pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02618936s Sep 29 11:43:56.750: INFO: Pod "pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057370054s STEP: Saw pod success Sep 29 11:43:56.750: INFO: Pod "pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0" satisfied condition "Succeeded or Failed" Sep 29 11:43:56.753: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0 container configmap-volume-test: STEP: delete the pod Sep 29 11:43:56.786: INFO: Waiting for pod pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0 to disappear Sep 29 11:43:56.815: INFO: Pod pod-configmaps-bd9b2737-d325-4267-adb3-24da439dded0 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:43:56.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4436" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":215,"skipped":3436,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:43:56.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3350 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3350 I0929 11:43:56.978642 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3350, replica count: 2 I0929 11:44:00.029413 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:44:03.029684 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:44:03.029: INFO: Creating new exec pod Sep 29 11:44:08.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3350 execpodb8qs6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Sep 29 11:44:08.304: INFO: stderr: "I0929 11:44:08.204265 2404 log.go:181] (0xc00003a0b0) (0xc000a18140) Create stream\nI0929 11:44:08.204314 2404 log.go:181] (0xc00003a0b0) (0xc000a18140) Stream added, broadcasting: 1\nI0929 11:44:08.205974 2404 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0929 11:44:08.206014 2404 log.go:181] (0xc00003a0b0) (0xc000962460) Create stream\nI0929 11:44:08.206026 2404 log.go:181] (0xc00003a0b0) (0xc000962460) Stream added, broadcasting: 3\nI0929 11:44:08.207025 2404 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0929 11:44:08.207057 2404 log.go:181] (0xc00003a0b0) (0xc000962500) Create stream\nI0929 11:44:08.207067 2404 log.go:181] (0xc00003a0b0) (0xc000962500) Stream added, broadcasting: 5\nI0929 11:44:08.208209 2404 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0929 11:44:08.296192 2404 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0929 11:44:08.296225 2404 log.go:181] (0xc000962500) (5) Data frame handling\nI0929 11:44:08.296248 2404 log.go:181] (0xc000962500) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0929 11:44:08.296647 2404 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0929 11:44:08.296689 2404 log.go:181] (0xc000962500) (5) Data frame handling\nI0929 11:44:08.296725 2404 log.go:181] (0xc000962500) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0929 11:44:08.296829 2404 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0929 11:44:08.296952 2404 log.go:181] (0xc000962500) (5) Data frame handling\nI0929 11:44:08.297308 2404 log.go:181] (0xc00003a0b0) Data frame received for 
3\nI0929 11:44:08.297328 2404 log.go:181] (0xc000962460) (3) Data frame handling\nI0929 11:44:08.299127 2404 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0929 11:44:08.299152 2404 log.go:181] (0xc000a18140) (1) Data frame handling\nI0929 11:44:08.299166 2404 log.go:181] (0xc000a18140) (1) Data frame sent\nI0929 11:44:08.299219 2404 log.go:181] (0xc00003a0b0) (0xc000a18140) Stream removed, broadcasting: 1\nI0929 11:44:08.299249 2404 log.go:181] (0xc00003a0b0) Go away received\nI0929 11:44:08.299650 2404 log.go:181] (0xc00003a0b0) (0xc000a18140) Stream removed, broadcasting: 1\nI0929 11:44:08.299675 2404 log.go:181] (0xc00003a0b0) (0xc000962460) Stream removed, broadcasting: 3\nI0929 11:44:08.299687 2404 log.go:181] (0xc00003a0b0) (0xc000962500) Stream removed, broadcasting: 5\n" Sep 29 11:44:08.304: INFO: stdout: "" Sep 29 11:44:08.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3350 execpodb8qs6 -- /bin/sh -x -c nc -zv -t -w 2 10.100.201.211 80' Sep 29 11:44:08.535: INFO: stderr: "I0929 11:44:08.450506 2422 log.go:181] (0xc000b32000) (0xc000b2a140) Create stream\nI0929 11:44:08.450568 2422 log.go:181] (0xc000b32000) (0xc000b2a140) Stream added, broadcasting: 1\nI0929 11:44:08.455232 2422 log.go:181] (0xc000b32000) Reply frame received for 1\nI0929 11:44:08.455303 2422 log.go:181] (0xc000b32000) (0xc000740000) Create stream\nI0929 11:44:08.455329 2422 log.go:181] (0xc000b32000) (0xc000740000) Stream added, broadcasting: 3\nI0929 11:44:08.456683 2422 log.go:181] (0xc000b32000) Reply frame received for 3\nI0929 11:44:08.456726 2422 log.go:181] (0xc000b32000) (0xc000b2a1e0) Create stream\nI0929 11:44:08.456745 2422 log.go:181] (0xc000b32000) (0xc000b2a1e0) Stream added, broadcasting: 5\nI0929 11:44:08.457730 2422 log.go:181] (0xc000b32000) Reply frame received for 5\nI0929 11:44:08.525882 2422 log.go:181] (0xc000b32000) Data frame received for 3\nI0929 11:44:08.526020 
2422 log.go:181] (0xc000740000) (3) Data frame handling\nI0929 11:44:08.526155 2422 log.go:181] (0xc000b32000) Data frame received for 5\nI0929 11:44:08.526356 2422 log.go:181] (0xc000b2a1e0) (5) Data frame handling\nI0929 11:44:08.526434 2422 log.go:181] (0xc000b2a1e0) (5) Data frame sent\nI0929 11:44:08.526503 2422 log.go:181] (0xc000b32000) Data frame received for 5\nI0929 11:44:08.526573 2422 log.go:181] (0xc000b2a1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.201.211 80\nConnection to 10.100.201.211 80 port [tcp/http] succeeded!\nI0929 11:44:08.528559 2422 log.go:181] (0xc000b32000) Data frame received for 1\nI0929 11:44:08.528585 2422 log.go:181] (0xc000b2a140) (1) Data frame handling\nI0929 11:44:08.528598 2422 log.go:181] (0xc000b2a140) (1) Data frame sent\nI0929 11:44:08.528612 2422 log.go:181] (0xc000b32000) (0xc000b2a140) Stream removed, broadcasting: 1\nI0929 11:44:08.528631 2422 log.go:181] (0xc000b32000) Go away received\nI0929 11:44:08.529098 2422 log.go:181] (0xc000b32000) (0xc000b2a140) Stream removed, broadcasting: 1\nI0929 11:44:08.529117 2422 log.go:181] (0xc000b32000) (0xc000740000) Stream removed, broadcasting: 3\nI0929 11:44:08.529125 2422 log.go:181] (0xc000b32000) (0xc000b2a1e0) Stream removed, broadcasting: 5\n" Sep 29 11:44:08.535: INFO: stdout: "" Sep 29 11:44:08.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3350 execpodb8qs6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30998' Sep 29 11:44:08.747: INFO: stderr: "I0929 11:44:08.658901 2441 log.go:181] (0xc00003b3f0) (0xc0005b48c0) Create stream\nI0929 11:44:08.658946 2441 log.go:181] (0xc00003b3f0) (0xc0005b48c0) Stream added, broadcasting: 1\nI0929 11:44:08.663865 2441 log.go:181] (0xc00003b3f0) Reply frame received for 1\nI0929 11:44:08.663913 2441 log.go:181] (0xc00003b3f0) (0xc0009a0460) Create stream\nI0929 11:44:08.663926 2441 log.go:181] (0xc00003b3f0) (0xc0009a0460) Stream added, 
broadcasting: 3\nI0929 11:44:08.664768 2441 log.go:181] (0xc00003b3f0) Reply frame received for 3\nI0929 11:44:08.664798 2441 log.go:181] (0xc00003b3f0) (0xc00043bd60) Create stream\nI0929 11:44:08.664809 2441 log.go:181] (0xc00003b3f0) (0xc00043bd60) Stream added, broadcasting: 5\nI0929 11:44:08.665700 2441 log.go:181] (0xc00003b3f0) Reply frame received for 5\nI0929 11:44:08.740575 2441 log.go:181] (0xc00003b3f0) Data frame received for 5\nI0929 11:44:08.740633 2441 log.go:181] (0xc00043bd60) (5) Data frame handling\nI0929 11:44:08.740648 2441 log.go:181] (0xc00043bd60) (5) Data frame sent\nI0929 11:44:08.740657 2441 log.go:181] (0xc00003b3f0) Data frame received for 5\nI0929 11:44:08.740665 2441 log.go:181] (0xc00043bd60) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30998\nConnection to 172.18.0.12 30998 port [tcp/30998] succeeded!\nI0929 11:44:08.740690 2441 log.go:181] (0xc00003b3f0) Data frame received for 3\nI0929 11:44:08.740700 2441 log.go:181] (0xc0009a0460) (3) Data frame handling\nI0929 11:44:08.743040 2441 log.go:181] (0xc00003b3f0) Data frame received for 1\nI0929 11:44:08.743056 2441 log.go:181] (0xc0005b48c0) (1) Data frame handling\nI0929 11:44:08.743064 2441 log.go:181] (0xc0005b48c0) (1) Data frame sent\nI0929 11:44:08.743083 2441 log.go:181] (0xc00003b3f0) (0xc0005b48c0) Stream removed, broadcasting: 1\nI0929 11:44:08.743119 2441 log.go:181] (0xc00003b3f0) Go away received\nI0929 11:44:08.743572 2441 log.go:181] (0xc00003b3f0) (0xc0005b48c0) Stream removed, broadcasting: 1\nI0929 11:44:08.743593 2441 log.go:181] (0xc00003b3f0) (0xc0009a0460) Stream removed, broadcasting: 3\nI0929 11:44:08.743601 2441 log.go:181] (0xc00003b3f0) (0xc00043bd60) Stream removed, broadcasting: 5\n" Sep 29 11:44:08.747: INFO: stdout: "" Sep 29 11:44:08.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-3350 execpodb8qs6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30998' Sep 
29 11:44:08.955: INFO: stderr: "I0929 11:44:08.884344 2460 log.go:181] (0xc00018cf20) (0xc000ab6140) Create stream\nI0929 11:44:08.884409 2460 log.go:181] (0xc00018cf20) (0xc000ab6140) Stream added, broadcasting: 1\nI0929 11:44:08.886591 2460 log.go:181] (0xc00018cf20) Reply frame received for 1\nI0929 11:44:08.886643 2460 log.go:181] (0xc00018cf20) (0xc000c86000) Create stream\nI0929 11:44:08.886659 2460 log.go:181] (0xc00018cf20) (0xc000c86000) Stream added, broadcasting: 3\nI0929 11:44:08.887971 2460 log.go:181] (0xc00018cf20) Reply frame received for 3\nI0929 11:44:08.888009 2460 log.go:181] (0xc00018cf20) (0xc0004d00a0) Create stream\nI0929 11:44:08.888027 2460 log.go:181] (0xc00018cf20) (0xc0004d00a0) Stream added, broadcasting: 5\nI0929 11:44:08.889282 2460 log.go:181] (0xc00018cf20) Reply frame received for 5\nI0929 11:44:08.948613 2460 log.go:181] (0xc00018cf20) Data frame received for 5\nI0929 11:44:08.948671 2460 log.go:181] (0xc0004d00a0) (5) Data frame handling\nI0929 11:44:08.948695 2460 log.go:181] (0xc0004d00a0) (5) Data frame sent\nI0929 11:44:08.948716 2460 log.go:181] (0xc00018cf20) Data frame received for 5\nI0929 11:44:08.948727 2460 log.go:181] (0xc0004d00a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30998\nConnection to 172.18.0.13 30998 port [tcp/30998] succeeded!\nI0929 11:44:08.948761 2460 log.go:181] (0xc00018cf20) Data frame received for 3\nI0929 11:44:08.948790 2460 log.go:181] (0xc000c86000) (3) Data frame handling\nI0929 11:44:08.950514 2460 log.go:181] (0xc00018cf20) Data frame received for 1\nI0929 11:44:08.950531 2460 log.go:181] (0xc000ab6140) (1) Data frame handling\nI0929 11:44:08.950541 2460 log.go:181] (0xc000ab6140) (1) Data frame sent\nI0929 11:44:08.950552 2460 log.go:181] (0xc00018cf20) (0xc000ab6140) Stream removed, broadcasting: 1\nI0929 11:44:08.950568 2460 log.go:181] (0xc00018cf20) Go away received\nI0929 11:44:08.950977 2460 log.go:181] (0xc00018cf20) (0xc000ab6140) Stream removed, broadcasting: 1\nI0929 
11:44:08.950996 2460 log.go:181] (0xc00018cf20) (0xc000c86000) Stream removed, broadcasting: 3\nI0929 11:44:08.951007 2460 log.go:181] (0xc00018cf20) (0xc0004d00a0) Stream removed, broadcasting: 5\n" Sep 29 11:44:08.955: INFO: stdout: "" Sep 29 11:44:08.955: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:08.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3350" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.176 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":216,"skipped":3452,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Sep 29 11:44:09.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:13.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5234" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3464,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:44:13.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:13.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4163" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":218,"skipped":3480,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:44:13.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 
11:44:17.425: INFO: Waiting up to 5m0s for pod "client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd" in namespace "pods-6682" to be "Succeeded or Failed" Sep 29 11:44:17.455: INFO: Pod "client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd": Phase="Pending", Reason="", readiness=false. Elapsed: 29.873433ms Sep 29 11:44:19.481: INFO: Pod "client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055943771s Sep 29 11:44:21.486: INFO: Pod "client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060830901s STEP: Saw pod success Sep 29 11:44:21.486: INFO: Pod "client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd" satisfied condition "Succeeded or Failed" Sep 29 11:44:21.489: INFO: Trying to get logs from node kali-worker2 pod client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd container env3cont: STEP: delete the pod Sep 29 11:44:21.533: INFO: Waiting for pod client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd to disappear Sep 29 11:44:21.576: INFO: Pod client-envvars-f413ba02-122e-4535-87ff-fdab345d65bd no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:21.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6682" for this suite. 
• [SLOW TEST:8.290 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":219,"skipped":3501,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:44:21.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Sep 29 11:44:21.633: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:35.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4277" for this suite. • [SLOW TEST:13.753 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":220,"skipped":3503,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:44:35.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:44:35.420: INFO: >>> kubeConfig: 
/root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6428 I0929 11:44:35.436675 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6428, replica count: 1 I0929 11:44:36.487121 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:44:37.487298 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:44:38.487520 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:44:39.487809 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:44:39.654: INFO: Created: latency-svc-fmrmt Sep 29 11:44:39.682: INFO: Got endpoints: latency-svc-fmrmt [94.170625ms] Sep 29 11:44:39.722: INFO: Created: latency-svc-56tw8 Sep 29 11:44:39.735: INFO: Got endpoints: latency-svc-56tw8 [52.756786ms] Sep 29 11:44:39.753: INFO: Created: latency-svc-bxwjc Sep 29 11:44:39.780: INFO: Got endpoints: latency-svc-bxwjc [98.010869ms] Sep 29 11:44:39.804: INFO: Created: latency-svc-bqq49 Sep 29 11:44:39.819: INFO: Got endpoints: latency-svc-bqq49 [136.801523ms] Sep 29 11:44:39.841: INFO: Created: latency-svc-d9h2m Sep 29 11:44:39.855: INFO: Got endpoints: latency-svc-d9h2m [172.389001ms] Sep 29 11:44:39.930: INFO: Created: latency-svc-qqw25 Sep 29 11:44:39.975: INFO: Got endpoints: latency-svc-qqw25 [293.209185ms] Sep 29 11:44:40.011: INFO: Created: latency-svc-bdwd2 Sep 29 11:44:40.024: INFO: Got endpoints: latency-svc-bdwd2 [341.671723ms] Sep 29 11:44:40.067: INFO: Created: latency-svc-hsmjm Sep 29 11:44:40.084: INFO: Got endpoints: latency-svc-hsmjm [401.949971ms] Sep 29 11:44:40.117: INFO: 
Created: latency-svc-wd482 Sep 29 11:44:40.132: INFO: Got endpoints: latency-svc-wd482 [450.035741ms] Sep 29 11:44:40.212: INFO: Created: latency-svc-nq2dd Sep 29 11:44:40.216: INFO: Got endpoints: latency-svc-nq2dd [534.087976ms] Sep 29 11:44:40.250: INFO: Created: latency-svc-kkw6h Sep 29 11:44:40.264: INFO: Got endpoints: latency-svc-kkw6h [581.670527ms] Sep 29 11:44:40.287: INFO: Created: latency-svc-phsjb Sep 29 11:44:40.300: INFO: Got endpoints: latency-svc-phsjb [617.984254ms] Sep 29 11:44:40.349: INFO: Created: latency-svc-vmwl4 Sep 29 11:44:40.361: INFO: Got endpoints: latency-svc-vmwl4 [678.536035ms] Sep 29 11:44:40.380: INFO: Created: latency-svc-9qzgk Sep 29 11:44:40.428: INFO: Got endpoints: latency-svc-9qzgk [745.917792ms] Sep 29 11:44:40.505: INFO: Created: latency-svc-7vh5w Sep 29 11:44:40.533: INFO: Got endpoints: latency-svc-7vh5w [850.451505ms] Sep 29 11:44:40.533: INFO: Created: latency-svc-zxd2m Sep 29 11:44:40.562: INFO: Got endpoints: latency-svc-zxd2m [879.337436ms] Sep 29 11:44:40.648: INFO: Created: latency-svc-vr45f Sep 29 11:44:40.676: INFO: Got endpoints: latency-svc-vr45f [941.452649ms] Sep 29 11:44:40.676: INFO: Created: latency-svc-n6fq8 Sep 29 11:44:40.712: INFO: Got endpoints: latency-svc-n6fq8 [932.224089ms] Sep 29 11:44:40.816: INFO: Created: latency-svc-mg8x4 Sep 29 11:44:40.842: INFO: Created: latency-svc-tkdkt Sep 29 11:44:40.842: INFO: Got endpoints: latency-svc-mg8x4 [1.02353262s] Sep 29 11:44:40.860: INFO: Got endpoints: latency-svc-tkdkt [1.005236639s] Sep 29 11:44:40.978: INFO: Created: latency-svc-xbtxg Sep 29 11:44:40.983: INFO: Got endpoints: latency-svc-xbtxg [1.00733379s] Sep 29 11:44:41.028: INFO: Created: latency-svc-t8zsh Sep 29 11:44:41.072: INFO: Got endpoints: latency-svc-t8zsh [1.048251545s] Sep 29 11:44:41.132: INFO: Created: latency-svc-xfwsk Sep 29 11:44:41.151: INFO: Got endpoints: latency-svc-xfwsk [1.066673362s] Sep 29 11:44:41.196: INFO: Created: latency-svc-j8psq Sep 29 11:44:41.209: INFO: Got 
endpoints: latency-svc-j8psq [1.076879206s] Sep 29 11:44:41.278: INFO: Created: latency-svc-qgxqg Sep 29 11:44:41.287: INFO: Got endpoints: latency-svc-qgxqg [1.071379143s] Sep 29 11:44:41.336: INFO: Created: latency-svc-ggrt5 Sep 29 11:44:41.353: INFO: Got endpoints: latency-svc-ggrt5 [1.089331441s] Sep 29 11:44:41.415: INFO: Created: latency-svc-c8rzn Sep 29 11:44:41.419: INFO: Got endpoints: latency-svc-c8rzn [1.118490811s] Sep 29 11:44:41.473: INFO: Created: latency-svc-q46kz Sep 29 11:44:41.486: INFO: Got endpoints: latency-svc-q46kz [1.125143879s] Sep 29 11:44:41.552: INFO: Created: latency-svc-x7zs8 Sep 29 11:44:41.571: INFO: Got endpoints: latency-svc-x7zs8 [1.143209488s] Sep 29 11:44:41.618: INFO: Created: latency-svc-w6ltd Sep 29 11:44:41.631: INFO: Got endpoints: latency-svc-w6ltd [1.097993704s] Sep 29 11:44:41.705: INFO: Created: latency-svc-8ndlh Sep 29 11:44:41.709: INFO: Got endpoints: latency-svc-8ndlh [1.147028024s] Sep 29 11:44:41.748: INFO: Created: latency-svc-d8p7k Sep 29 11:44:41.757: INFO: Got endpoints: latency-svc-d8p7k [1.08011639s] Sep 29 11:44:41.841: INFO: Created: latency-svc-zp42m Sep 29 11:44:41.844: INFO: Got endpoints: latency-svc-zp42m [1.131366468s] Sep 29 11:44:41.910: INFO: Created: latency-svc-ghp6j Sep 29 11:44:41.919: INFO: Got endpoints: latency-svc-ghp6j [1.076891482s] Sep 29 11:44:41.990: INFO: Created: latency-svc-9lc7s Sep 29 11:44:41.998: INFO: Got endpoints: latency-svc-9lc7s [1.137859729s] Sep 29 11:44:42.069: INFO: Created: latency-svc-bkmmd Sep 29 11:44:42.152: INFO: Got endpoints: latency-svc-bkmmd [1.168759418s] Sep 29 11:44:42.179: INFO: Created: latency-svc-fb7h7 Sep 29 11:44:42.200: INFO: Got endpoints: latency-svc-fb7h7 [1.127585034s] Sep 29 11:44:42.230: INFO: Created: latency-svc-pg2vt Sep 29 11:44:42.307: INFO: Got endpoints: latency-svc-pg2vt [1.156121759s] Sep 29 11:44:42.342: INFO: Created: latency-svc-sdlpg Sep 29 11:44:42.384: INFO: Got endpoints: latency-svc-sdlpg [1.174725326s] Sep 29 11:44:42.439: 
INFO: Created: latency-svc-5hhhx Sep 29 11:44:42.455: INFO: Got endpoints: latency-svc-5hhhx [1.16737833s] Sep 29 11:44:42.482: INFO: Created: latency-svc-jxgqx Sep 29 11:44:42.498: INFO: Got endpoints: latency-svc-jxgqx [1.143958961s] Sep 29 11:44:42.523: INFO: Created: latency-svc-9krg4 Sep 29 11:44:42.565: INFO: Got endpoints: latency-svc-9krg4 [1.145484868s] Sep 29 11:44:42.588: INFO: Created: latency-svc-px47c Sep 29 11:44:42.612: INFO: Got endpoints: latency-svc-px47c [1.125424044s] Sep 29 11:44:42.647: INFO: Created: latency-svc-65phf Sep 29 11:44:42.720: INFO: Got endpoints: latency-svc-65phf [1.148819428s] Sep 29 11:44:42.734: INFO: Created: latency-svc-mjl82 Sep 29 11:44:42.764: INFO: Got endpoints: latency-svc-mjl82 [1.133491543s] Sep 29 11:44:42.883: INFO: Created: latency-svc-ndxc9 Sep 29 11:44:42.914: INFO: Got endpoints: latency-svc-ndxc9 [1.204959014s] Sep 29 11:44:42.969: INFO: Created: latency-svc-mzr7p Sep 29 11:44:43.020: INFO: Got endpoints: latency-svc-mzr7p [1.263019546s] Sep 29 11:44:43.044: INFO: Created: latency-svc-jdqwb Sep 29 11:44:43.057: INFO: Got endpoints: latency-svc-jdqwb [1.213253672s] Sep 29 11:44:43.079: INFO: Created: latency-svc-nwp8h Sep 29 11:44:43.093: INFO: Got endpoints: latency-svc-nwp8h [1.173384632s] Sep 29 11:44:43.139: INFO: Created: latency-svc-6lzbn Sep 29 11:44:43.148: INFO: Got endpoints: latency-svc-6lzbn [1.149453195s] Sep 29 11:44:43.195: INFO: Created: latency-svc-d6hxg Sep 29 11:44:43.214: INFO: Got endpoints: latency-svc-d6hxg [1.061769548s] Sep 29 11:44:43.284: INFO: Created: latency-svc-mxl4c Sep 29 11:44:43.307: INFO: Got endpoints: latency-svc-mxl4c [1.106870401s] Sep 29 11:44:43.340: INFO: Created: latency-svc-vq8qb Sep 29 11:44:43.364: INFO: Got endpoints: latency-svc-vq8qb [1.056313241s] Sep 29 11:44:43.415: INFO: Created: latency-svc-d2llv Sep 29 11:44:43.430: INFO: Got endpoints: latency-svc-d2llv [1.046278802s] Sep 29 11:44:43.447: INFO: Created: latency-svc-kt484 Sep 29 11:44:43.460: INFO: Got 
endpoints: latency-svc-kt484 [1.005635719s] Sep 29 11:44:43.482: INFO: Created: latency-svc-h2gmt Sep 29 11:44:43.490: INFO: Got endpoints: latency-svc-h2gmt [992.761873ms] Sep 29 11:44:43.511: INFO: Created: latency-svc-jskzj Sep 29 11:44:43.546: INFO: Got endpoints: latency-svc-jskzj [981.789467ms] Sep 29 11:44:43.586: INFO: Created: latency-svc-n7pjd Sep 29 11:44:43.605: INFO: Got endpoints: latency-svc-n7pjd [993.500795ms] Sep 29 11:44:43.640: INFO: Created: latency-svc-t4plz Sep 29 11:44:43.684: INFO: Got endpoints: latency-svc-t4plz [963.919351ms] Sep 29 11:44:43.710: INFO: Created: latency-svc-kz5d4 Sep 29 11:44:43.720: INFO: Got endpoints: latency-svc-kz5d4 [955.083133ms] Sep 29 11:44:43.745: INFO: Created: latency-svc-bt4p7 Sep 29 11:44:43.756: INFO: Got endpoints: latency-svc-bt4p7 [842.032467ms] Sep 29 11:44:43.777: INFO: Created: latency-svc-zb9c7 Sep 29 11:44:43.831: INFO: Got endpoints: latency-svc-zb9c7 [811.831778ms] Sep 29 11:44:43.868: INFO: Created: latency-svc-l29sk Sep 29 11:44:43.889: INFO: Got endpoints: latency-svc-l29sk [832.008666ms] Sep 29 11:44:43.955: INFO: Created: latency-svc-tswxl Sep 29 11:44:43.958: INFO: Got endpoints: latency-svc-tswxl [865.507111ms] Sep 29 11:44:43.985: INFO: Created: latency-svc-l246f Sep 29 11:44:43.998: INFO: Got endpoints: latency-svc-l246f [849.97046ms] Sep 29 11:44:44.017: INFO: Created: latency-svc-b88kr Sep 29 11:44:44.033: INFO: Got endpoints: latency-svc-b88kr [819.719278ms] Sep 29 11:44:44.053: INFO: Created: latency-svc-bg446 Sep 29 11:44:44.102: INFO: Got endpoints: latency-svc-bg446 [794.710055ms] Sep 29 11:44:44.135: INFO: Created: latency-svc-g2x4x Sep 29 11:44:44.148: INFO: Got endpoints: latency-svc-g2x4x [784.347623ms] Sep 29 11:44:44.165: INFO: Created: latency-svc-nbbwd Sep 29 11:44:44.179: INFO: Got endpoints: latency-svc-nbbwd [748.646626ms] Sep 29 11:44:44.235: INFO: Created: latency-svc-9g9cg Sep 29 11:44:44.240: INFO: Got endpoints: latency-svc-9g9cg [779.460284ms] Sep 29 11:44:44.294: 
INFO: Created: latency-svc-cb9qj Sep 29 11:44:44.311: INFO: Got endpoints: latency-svc-cb9qj [820.462777ms] Sep 29 11:44:44.330: INFO: Created: latency-svc-zzdl4 Sep 29 11:44:44.385: INFO: Got endpoints: latency-svc-zzdl4 [838.288202ms] Sep 29 11:44:44.399: INFO: Created: latency-svc-jjwq5 Sep 29 11:44:44.413: INFO: Got endpoints: latency-svc-jjwq5 [808.129117ms] Sep 29 11:44:44.435: INFO: Created: latency-svc-ktjnz Sep 29 11:44:44.444: INFO: Got endpoints: latency-svc-ktjnz [759.731547ms] Sep 29 11:44:44.468: INFO: Created: latency-svc-7b5x2 Sep 29 11:44:44.481: INFO: Got endpoints: latency-svc-7b5x2 [761.421375ms] Sep 29 11:44:44.529: INFO: Created: latency-svc-kk8ht Sep 29 11:44:44.540: INFO: Got endpoints: latency-svc-kk8ht [784.132178ms] Sep 29 11:44:44.576: INFO: Created: latency-svc-pnzm9 Sep 29 11:44:44.595: INFO: Got endpoints: latency-svc-pnzm9 [763.476345ms] Sep 29 11:44:44.673: INFO: Created: latency-svc-2pww7 Sep 29 11:44:44.679: INFO: Got endpoints: latency-svc-2pww7 [789.859366ms] Sep 29 11:44:44.713: INFO: Created: latency-svc-xf5ks Sep 29 11:44:44.734: INFO: Got endpoints: latency-svc-xf5ks [775.16948ms] Sep 29 11:44:44.755: INFO: Created: latency-svc-ldzcp Sep 29 11:44:44.770: INFO: Got endpoints: latency-svc-ldzcp [772.436202ms] Sep 29 11:44:44.816: INFO: Created: latency-svc-vq4mw Sep 29 11:44:44.824: INFO: Got endpoints: latency-svc-vq4mw [790.353269ms] Sep 29 11:44:44.867: INFO: Created: latency-svc-5kh6m Sep 29 11:44:44.885: INFO: Got endpoints: latency-svc-5kh6m [782.587779ms] Sep 29 11:44:44.912: INFO: Created: latency-svc-sfvtz Sep 29 11:44:44.977: INFO: Got endpoints: latency-svc-sfvtz [829.347089ms] Sep 29 11:44:44.983: INFO: Created: latency-svc-cwqrn Sep 29 11:44:45.011: INFO: Got endpoints: latency-svc-cwqrn [831.88628ms] Sep 29 11:44:45.047: INFO: Created: latency-svc-l2bzv Sep 29 11:44:45.059: INFO: Got endpoints: latency-svc-l2bzv [819.083147ms] Sep 29 11:44:45.109: INFO: Created: latency-svc-g2w4q Sep 29 11:44:45.115: INFO: Got 
endpoints: latency-svc-g2w4q [803.887492ms] Sep 29 11:44:45.146: INFO: Created: latency-svc-25fwc Sep 29 11:44:45.156: INFO: Got endpoints: latency-svc-25fwc [770.883458ms] Sep 29 11:44:45.176: INFO: Created: latency-svc-6lcnj Sep 29 11:44:45.197: INFO: Got endpoints: latency-svc-6lcnj [783.631203ms] Sep 29 11:44:45.235: INFO: Created: latency-svc-6gstv Sep 29 11:44:45.241: INFO: Got endpoints: latency-svc-6gstv [796.428717ms] Sep 29 11:44:45.265: INFO: Created: latency-svc-gjrx9 Sep 29 11:44:45.296: INFO: Got endpoints: latency-svc-gjrx9 [815.157524ms] Sep 29 11:44:45.379: INFO: Created: latency-svc-gqxxs Sep 29 11:44:45.383: INFO: Got endpoints: latency-svc-gqxxs [843.1705ms] Sep 29 11:44:45.425: INFO: Created: latency-svc-ffzfv Sep 29 11:44:45.455: INFO: Got endpoints: latency-svc-ffzfv [859.605126ms] Sep 29 11:44:45.511: INFO: Created: latency-svc-5dd9r Sep 29 11:44:45.547: INFO: Got endpoints: latency-svc-5dd9r [868.367031ms] Sep 29 11:44:45.548: INFO: Created: latency-svc-5c5hp Sep 29 11:44:45.571: INFO: Got endpoints: latency-svc-5c5hp [837.622339ms] Sep 29 11:44:45.595: INFO: Created: latency-svc-68mlb Sep 29 11:44:45.608: INFO: Got endpoints: latency-svc-68mlb [837.603188ms] Sep 29 11:44:45.647: INFO: Created: latency-svc-f8b7n Sep 29 11:44:45.677: INFO: Got endpoints: latency-svc-f8b7n [853.131142ms] Sep 29 11:44:45.707: INFO: Created: latency-svc-svm79 Sep 29 11:44:45.723: INFO: Got endpoints: latency-svc-svm79 [838.44151ms] Sep 29 11:44:45.780: INFO: Created: latency-svc-dfqmb Sep 29 11:44:45.787: INFO: Got endpoints: latency-svc-dfqmb [809.948433ms] Sep 29 11:44:45.833: INFO: Created: latency-svc-9ph7j Sep 29 11:44:45.849: INFO: Got endpoints: latency-svc-9ph7j [838.4644ms] Sep 29 11:44:45.924: INFO: Created: latency-svc-phx28 Sep 29 11:44:45.955: INFO: Got endpoints: latency-svc-phx28 [895.987996ms] Sep 29 11:44:45.998: INFO: Created: latency-svc-b6glz Sep 29 11:44:46.017: INFO: Got endpoints: latency-svc-b6glz [902.596486ms] Sep 29 11:44:46.056: 
INFO: Created: latency-svc-smzln Sep 29 11:44:46.066: INFO: Got endpoints: latency-svc-smzln [909.780287ms] Sep 29 11:44:46.103: INFO: Created: latency-svc-49q5j Sep 29 11:44:46.120: INFO: Got endpoints: latency-svc-49q5j [923.296582ms] Sep 29 11:44:46.139: INFO: Created: latency-svc-2b8j2 Sep 29 11:44:46.193: INFO: Got endpoints: latency-svc-2b8j2 [952.480624ms] Sep 29 11:44:46.220: INFO: Created: latency-svc-d6t4k Sep 29 11:44:46.229: INFO: Got endpoints: latency-svc-d6t4k [933.142663ms] Sep 29 11:44:46.250: INFO: Created: latency-svc-dww69 Sep 29 11:44:46.271: INFO: Got endpoints: latency-svc-dww69 [887.167576ms] Sep 29 11:44:46.325: INFO: Created: latency-svc-rdncf Sep 29 11:44:46.329: INFO: Got endpoints: latency-svc-rdncf [874.302526ms] Sep 29 11:44:46.375: INFO: Created: latency-svc-ztn22 Sep 29 11:44:46.391: INFO: Got endpoints: latency-svc-ztn22 [843.650183ms] Sep 29 11:44:46.411: INFO: Created: latency-svc-hfmq7 Sep 29 11:44:46.422: INFO: Got endpoints: latency-svc-hfmq7 [850.588107ms] Sep 29 11:44:46.469: INFO: Created: latency-svc-8bddh Sep 29 11:44:46.472: INFO: Got endpoints: latency-svc-8bddh [864.755599ms] Sep 29 11:44:46.530: INFO: Created: latency-svc-qgd5r Sep 29 11:44:46.543: INFO: Got endpoints: latency-svc-qgd5r [865.898471ms] Sep 29 11:44:46.565: INFO: Created: latency-svc-9zqkg Sep 29 11:44:46.594: INFO: Got endpoints: latency-svc-9zqkg [871.274745ms] Sep 29 11:44:46.627: INFO: Created: latency-svc-lbqgn Sep 29 11:44:46.657: INFO: Got endpoints: latency-svc-lbqgn [870.003712ms] Sep 29 11:44:46.682: INFO: Created: latency-svc-sfxj6 Sep 29 11:44:46.694: INFO: Got endpoints: latency-svc-sfxj6 [844.572914ms] Sep 29 11:44:46.739: INFO: Created: latency-svc-44wgf Sep 29 11:44:46.794: INFO: Got endpoints: latency-svc-44wgf [838.316329ms] Sep 29 11:44:46.864: INFO: Created: latency-svc-bhwkb Sep 29 11:44:46.873: INFO: Got endpoints: latency-svc-bhwkb [855.901118ms] Sep 29 11:44:46.897: INFO: Created: latency-svc-vzphm Sep 29 11:44:46.918: INFO: Got 
endpoints: latency-svc-vzphm [852.041481ms] Sep 29 11:44:47.038: INFO: Created: latency-svc-mpkwp Sep 29 11:44:47.041: INFO: Got endpoints: latency-svc-mpkwp [920.970533ms] Sep 29 11:44:47.102: INFO: Created: latency-svc-2ckj9 Sep 29 11:44:47.132: INFO: Got endpoints: latency-svc-2ckj9 [939.15771ms] Sep 29 11:44:47.170: INFO: Created: latency-svc-xjltz Sep 29 11:44:47.181: INFO: Got endpoints: latency-svc-xjltz [951.265357ms] Sep 29 11:44:47.201: INFO: Created: latency-svc-kwmv2 Sep 29 11:44:47.225: INFO: Got endpoints: latency-svc-kwmv2 [954.484256ms] Sep 29 11:44:47.254: INFO: Created: latency-svc-dx9d2 Sep 29 11:44:47.295: INFO: Got endpoints: latency-svc-dx9d2 [965.73515ms] Sep 29 11:44:47.311: INFO: Created: latency-svc-btpn6 Sep 29 11:44:47.341: INFO: Got endpoints: latency-svc-btpn6 [950.051492ms] Sep 29 11:44:47.381: INFO: Created: latency-svc-f9rhs Sep 29 11:44:47.421: INFO: Got endpoints: latency-svc-f9rhs [999.273719ms] Sep 29 11:44:47.423: INFO: Created: latency-svc-zk64m Sep 29 11:44:47.440: INFO: Got endpoints: latency-svc-zk64m [967.151727ms] Sep 29 11:44:47.459: INFO: Created: latency-svc-5cjdf Sep 29 11:44:47.470: INFO: Got endpoints: latency-svc-5cjdf [927.298825ms] Sep 29 11:44:47.491: INFO: Created: latency-svc-7hp69 Sep 29 11:44:47.583: INFO: Got endpoints: latency-svc-7hp69 [988.65573ms] Sep 29 11:44:47.586: INFO: Created: latency-svc-w859f Sep 29 11:44:47.596: INFO: Got endpoints: latency-svc-w859f [938.535232ms] Sep 29 11:44:47.626: INFO: Created: latency-svc-lh6rs Sep 29 11:44:47.651: INFO: Got endpoints: latency-svc-lh6rs [957.080718ms] Sep 29 11:44:47.675: INFO: Created: latency-svc-nf5nk Sep 29 11:44:47.708: INFO: Got endpoints: latency-svc-nf5nk [914.676291ms] Sep 29 11:44:47.710: INFO: Created: latency-svc-m66rj Sep 29 11:44:47.749: INFO: Got endpoints: latency-svc-m66rj [875.130142ms] Sep 29 11:44:47.791: INFO: Created: latency-svc-dgg6p Sep 29 11:44:47.858: INFO: Got endpoints: latency-svc-dgg6p [940.07461ms] Sep 29 11:44:47.860: 
INFO: Created: latency-svc-qvdvh Sep 29 11:44:47.875: INFO: Got endpoints: latency-svc-qvdvh [833.299849ms] Sep 29 11:44:47.904: INFO: Created: latency-svc-xg9gb Sep 29 11:44:47.910: INFO: Got endpoints: latency-svc-xg9gb [777.771524ms] Sep 29 11:44:47.935: INFO: Created: latency-svc-rvs59 Sep 29 11:44:48.289: INFO: Got endpoints: latency-svc-rvs59 [1.10819936s] Sep 29 11:44:48.312: INFO: Created: latency-svc-vzlqf Sep 29 11:44:48.318: INFO: Got endpoints: latency-svc-vzlqf [1.092756028s] Sep 29 11:44:48.853: INFO: Created: latency-svc-68knt Sep 29 11:44:48.990: INFO: Got endpoints: latency-svc-68knt [1.6951725s] Sep 29 11:44:49.020: INFO: Created: latency-svc-2ckl8 Sep 29 11:44:49.163: INFO: Got endpoints: latency-svc-2ckl8 [1.822189018s] Sep 29 11:44:49.212: INFO: Created: latency-svc-lw2km Sep 29 11:44:49.224: INFO: Got endpoints: latency-svc-lw2km [1.802347586s] Sep 29 11:44:49.320: INFO: Created: latency-svc-sq5cr Sep 29 11:44:49.362: INFO: Got endpoints: latency-svc-sq5cr [1.922516975s] Sep 29 11:44:49.541: INFO: Created: latency-svc-tw6r5 Sep 29 11:44:49.673: INFO: Got endpoints: latency-svc-tw6r5 [2.202347735s] Sep 29 11:44:49.698: INFO: Created: latency-svc-fqbxl Sep 29 11:44:49.722: INFO: Got endpoints: latency-svc-fqbxl [2.138893049s] Sep 29 11:44:49.811: INFO: Created: latency-svc-ldmxg Sep 29 11:44:49.834: INFO: Got endpoints: latency-svc-ldmxg [2.237621451s] Sep 29 11:44:49.870: INFO: Created: latency-svc-nr7fq Sep 29 11:44:49.878: INFO: Got endpoints: latency-svc-nr7fq [2.227582053s] Sep 29 11:44:49.902: INFO: Created: latency-svc-v6xm8 Sep 29 11:44:49.942: INFO: Got endpoints: latency-svc-v6xm8 [2.233572844s] Sep 29 11:44:49.957: INFO: Created: latency-svc-gdznj Sep 29 11:44:49.969: INFO: Got endpoints: latency-svc-gdznj [2.220187278s] Sep 29 11:44:49.992: INFO: Created: latency-svc-kb7k5 Sep 29 11:44:50.011: INFO: Got endpoints: latency-svc-kb7k5 [2.153315199s] Sep 29 11:44:50.087: INFO: Created: latency-svc-pfbxt Sep 29 11:44:50.090: INFO: Got 
endpoints: latency-svc-pfbxt [2.215349513s] Sep 29 11:44:50.134: INFO: Created: latency-svc-65x4l Sep 29 11:44:50.166: INFO: Got endpoints: latency-svc-65x4l [2.256104034s] Sep 29 11:44:50.241: INFO: Created: latency-svc-jvjnt Sep 29 11:44:50.251: INFO: Got endpoints: latency-svc-jvjnt [1.962348579s] Sep 29 11:44:50.284: INFO: Created: latency-svc-ltkmn Sep 29 11:44:50.315: INFO: Got endpoints: latency-svc-ltkmn [1.996502641s] Sep 29 11:44:50.376: INFO: Created: latency-svc-rw4hn Sep 29 11:44:50.388: INFO: Got endpoints: latency-svc-rw4hn [1.397722635s] Sep 29 11:44:50.418: INFO: Created: latency-svc-xpg8j Sep 29 11:44:50.432: INFO: Got endpoints: latency-svc-xpg8j [1.268874495s] Sep 29 11:44:50.454: INFO: Created: latency-svc-sk4bc Sep 29 11:44:50.522: INFO: Got endpoints: latency-svc-sk4bc [1.298616516s] Sep 29 11:44:50.525: INFO: Created: latency-svc-lpzsf Sep 29 11:44:50.535: INFO: Got endpoints: latency-svc-lpzsf [1.172559634s] Sep 29 11:44:50.556: INFO: Created: latency-svc-kgb9b Sep 29 11:44:50.571: INFO: Got endpoints: latency-svc-kgb9b [898.613179ms] Sep 29 11:44:50.593: INFO: Created: latency-svc-gmf57 Sep 29 11:44:50.602: INFO: Got endpoints: latency-svc-gmf57 [880.072276ms] Sep 29 11:44:50.622: INFO: Created: latency-svc-95kff Sep 29 11:44:50.668: INFO: Got endpoints: latency-svc-95kff [834.053053ms] Sep 29 11:44:50.669: INFO: Created: latency-svc-2nd7f Sep 29 11:44:50.692: INFO: Got endpoints: latency-svc-2nd7f [813.568062ms] Sep 29 11:44:50.717: INFO: Created: latency-svc-lklcm Sep 29 11:44:50.729: INFO: Got endpoints: latency-svc-lklcm [786.661165ms] Sep 29 11:44:50.745: INFO: Created: latency-svc-44n6g Sep 29 11:44:50.792: INFO: Got endpoints: latency-svc-44n6g [822.937979ms] Sep 29 11:44:50.802: INFO: Created: latency-svc-mcgr9 Sep 29 11:44:50.819: INFO: Got endpoints: latency-svc-mcgr9 [808.260036ms] Sep 29 11:44:50.839: INFO: Created: latency-svc-npwrj Sep 29 11:44:50.865: INFO: Got endpoints: latency-svc-npwrj [775.290366ms] Sep 29 11:44:50.936: 
INFO: Created: latency-svc-cx6cg Sep 29 11:44:50.975: INFO: Got endpoints: latency-svc-cx6cg [808.494409ms] Sep 29 11:44:51.080: INFO: Created: latency-svc-w5vfh Sep 29 11:44:51.101: INFO: Got endpoints: latency-svc-w5vfh [849.379014ms] Sep 29 11:44:51.140: INFO: Created: latency-svc-gm7vm Sep 29 11:44:51.150: INFO: Got endpoints: latency-svc-gm7vm [835.573955ms] Sep 29 11:44:51.168: INFO: Created: latency-svc-8g4w7 Sep 29 11:44:51.229: INFO: Got endpoints: latency-svc-8g4w7 [841.429458ms] Sep 29 11:44:51.234: INFO: Created: latency-svc-6282b Sep 29 11:44:51.241: INFO: Got endpoints: latency-svc-6282b [808.271866ms] Sep 29 11:44:51.262: INFO: Created: latency-svc-6h6h9 Sep 29 11:44:51.277: INFO: Got endpoints: latency-svc-6h6h9 [754.863514ms] Sep 29 11:44:51.306: INFO: Created: latency-svc-qvqcf Sep 29 11:44:51.326: INFO: Got endpoints: latency-svc-qvqcf [790.661288ms] Sep 29 11:44:51.391: INFO: Created: latency-svc-ksjpr Sep 29 11:44:51.418: INFO: Got endpoints: latency-svc-ksjpr [846.303492ms] Sep 29 11:44:51.454: INFO: Created: latency-svc-6rjj5 Sep 29 11:44:51.510: INFO: Got endpoints: latency-svc-6rjj5 [908.08193ms] Sep 29 11:44:51.516: INFO: Created: latency-svc-2q5wg Sep 29 11:44:51.534: INFO: Got endpoints: latency-svc-2q5wg [866.05883ms] Sep 29 11:44:51.570: INFO: Created: latency-svc-w5cnx Sep 29 11:44:51.593: INFO: Got endpoints: latency-svc-w5cnx [900.381034ms] Sep 29 11:44:51.649: INFO: Created: latency-svc-67ttx Sep 29 11:44:51.653: INFO: Got endpoints: latency-svc-67ttx [924.632829ms] Sep 29 11:44:51.700: INFO: Created: latency-svc-b2wcn Sep 29 11:44:51.717: INFO: Got endpoints: latency-svc-b2wcn [925.574101ms] Sep 29 11:44:51.736: INFO: Created: latency-svc-lzxzk Sep 29 11:44:51.774: INFO: Got endpoints: latency-svc-lzxzk [954.559318ms] Sep 29 11:44:51.792: INFO: Created: latency-svc-dhkb7 Sep 29 11:44:51.808: INFO: Got endpoints: latency-svc-dhkb7 [942.125292ms] Sep 29 11:44:51.828: INFO: Created: latency-svc-d9wfj Sep 29 11:44:51.844: INFO: Got 
endpoints: latency-svc-d9wfj [868.955066ms] Sep 29 11:44:51.864: INFO: Created: latency-svc-v2zns Sep 29 11:44:51.900: INFO: Got endpoints: latency-svc-v2zns [798.623584ms] Sep 29 11:44:51.903: INFO: Created: latency-svc-ztlcm Sep 29 11:44:51.923: INFO: Got endpoints: latency-svc-ztlcm [772.388695ms] Sep 29 11:44:51.982: INFO: Created: latency-svc-n974z Sep 29 11:44:52.056: INFO: Got endpoints: latency-svc-n974z [826.751627ms] Sep 29 11:44:52.091: INFO: Created: latency-svc-9zrrc Sep 29 11:44:52.132: INFO: Got endpoints: latency-svc-9zrrc [891.257401ms] Sep 29 11:44:52.200: INFO: Created: latency-svc-cdpgk Sep 29 11:44:52.204: INFO: Got endpoints: latency-svc-cdpgk [926.250821ms] Sep 29 11:44:52.254: INFO: Created: latency-svc-7rc7w Sep 29 11:44:52.272: INFO: Got endpoints: latency-svc-7rc7w [945.890647ms] Sep 29 11:44:52.296: INFO: Created: latency-svc-jbgwm Sep 29 11:44:52.337: INFO: Got endpoints: latency-svc-jbgwm [918.914339ms] Sep 29 11:44:52.351: INFO: Created: latency-svc-57pql Sep 29 11:44:52.362: INFO: Got endpoints: latency-svc-57pql [851.370091ms] Sep 29 11:44:52.389: INFO: Created: latency-svc-zctq7 Sep 29 11:44:52.404: INFO: Got endpoints: latency-svc-zctq7 [869.952114ms] Sep 29 11:44:52.475: INFO: Created: latency-svc-98n9w Sep 29 11:44:52.500: INFO: Created: latency-svc-snhfs Sep 29 11:44:52.501: INFO: Got endpoints: latency-svc-98n9w [907.953311ms] Sep 29 11:44:52.513: INFO: Got endpoints: latency-svc-snhfs [859.523739ms] Sep 29 11:44:52.530: INFO: Created: latency-svc-qd2fg Sep 29 11:44:52.544: INFO: Got endpoints: latency-svc-qd2fg [826.172152ms] Sep 29 11:44:52.563: INFO: Created: latency-svc-hfpcf Sep 29 11:44:52.612: INFO: Got endpoints: latency-svc-hfpcf [837.97695ms] Sep 29 11:44:52.614: INFO: Created: latency-svc-pfmlm Sep 29 11:44:52.628: INFO: Got endpoints: latency-svc-pfmlm [820.115906ms] Sep 29 11:44:52.651: INFO: Created: latency-svc-j2bks Sep 29 11:44:52.664: INFO: Got endpoints: latency-svc-j2bks [820.152461ms] Sep 29 11:44:52.686: 
INFO: Created: latency-svc-mnvbc Sep 29 11:44:52.744: INFO: Got endpoints: latency-svc-mnvbc [844.383474ms] Sep 29 11:44:52.768: INFO: Created: latency-svc-7f9t6 Sep 29 11:44:52.779: INFO: Got endpoints: latency-svc-7f9t6 [855.950586ms] Sep 29 11:44:52.798: INFO: Created: latency-svc-9cv9p Sep 29 11:44:52.824: INFO: Got endpoints: latency-svc-9cv9p [767.631756ms] Sep 29 11:44:52.876: INFO: Created: latency-svc-phs4d Sep 29 11:44:52.888: INFO: Got endpoints: latency-svc-phs4d [755.913382ms] Sep 29 11:44:52.908: INFO: Created: latency-svc-jk4mg Sep 29 11:44:52.924: INFO: Got endpoints: latency-svc-jk4mg [720.357133ms] Sep 29 11:44:52.954: INFO: Created: latency-svc-d2wcv Sep 29 11:44:52.966: INFO: Got endpoints: latency-svc-d2wcv [694.514626ms] Sep 29 11:44:53.008: INFO: Created: latency-svc-jfq2m Sep 29 11:44:53.014: INFO: Got endpoints: latency-svc-jfq2m [677.068179ms] Sep 29 11:44:53.014: INFO: Latencies: [52.756786ms 98.010869ms 136.801523ms 172.389001ms 293.209185ms 341.671723ms 401.949971ms 450.035741ms 534.087976ms 581.670527ms 617.984254ms 677.068179ms 678.536035ms 694.514626ms 720.357133ms 745.917792ms 748.646626ms 754.863514ms 755.913382ms 759.731547ms 761.421375ms 763.476345ms 767.631756ms 770.883458ms 772.388695ms 772.436202ms 775.16948ms 775.290366ms 777.771524ms 779.460284ms 782.587779ms 783.631203ms 784.132178ms 784.347623ms 786.661165ms 789.859366ms 790.353269ms 790.661288ms 794.710055ms 796.428717ms 798.623584ms 803.887492ms 808.129117ms 808.260036ms 808.271866ms 808.494409ms 809.948433ms 811.831778ms 813.568062ms 815.157524ms 819.083147ms 819.719278ms 820.115906ms 820.152461ms 820.462777ms 822.937979ms 826.172152ms 826.751627ms 829.347089ms 831.88628ms 832.008666ms 833.299849ms 834.053053ms 835.573955ms 837.603188ms 837.622339ms 837.97695ms 838.288202ms 838.316329ms 838.44151ms 838.4644ms 841.429458ms 842.032467ms 843.1705ms 843.650183ms 844.383474ms 844.572914ms 846.303492ms 849.379014ms 849.97046ms 850.451505ms 850.588107ms 851.370091ms 
852.041481ms 853.131142ms 855.901118ms 855.950586ms 859.523739ms 859.605126ms 864.755599ms 865.507111ms 865.898471ms 866.05883ms 868.367031ms 868.955066ms 869.952114ms 870.003712ms 871.274745ms 874.302526ms 875.130142ms 879.337436ms 880.072276ms 887.167576ms 891.257401ms 895.987996ms 898.613179ms 900.381034ms 902.596486ms 907.953311ms 908.08193ms 909.780287ms 914.676291ms 918.914339ms 920.970533ms 923.296582ms 924.632829ms 925.574101ms 926.250821ms 927.298825ms 932.224089ms 933.142663ms 938.535232ms 939.15771ms 940.07461ms 941.452649ms 942.125292ms 945.890647ms 950.051492ms 951.265357ms 952.480624ms 954.484256ms 954.559318ms 955.083133ms 957.080718ms 963.919351ms 965.73515ms 967.151727ms 981.789467ms 988.65573ms 992.761873ms 993.500795ms 999.273719ms 1.005236639s 1.005635719s 1.00733379s 1.02353262s 1.046278802s 1.048251545s 1.056313241s 1.061769548s 1.066673362s 1.071379143s 1.076879206s 1.076891482s 1.08011639s 1.089331441s 1.092756028s 1.097993704s 1.106870401s 1.10819936s 1.118490811s 1.125143879s 1.125424044s 1.127585034s 1.131366468s 1.133491543s 1.137859729s 1.143209488s 1.143958961s 1.145484868s 1.147028024s 1.148819428s 1.149453195s 1.156121759s 1.16737833s 1.168759418s 1.172559634s 1.173384632s 1.174725326s 1.204959014s 1.213253672s 1.263019546s 1.268874495s 1.298616516s 1.397722635s 1.6951725s 1.802347586s 1.822189018s 1.922516975s 1.962348579s 1.996502641s 2.138893049s 2.153315199s 2.202347735s 2.215349513s 2.220187278s 2.227582053s 2.233572844s 2.237621451s 2.256104034s] Sep 29 11:44:53.014: INFO: 50 %ile: 879.337436ms Sep 29 11:44:53.014: INFO: 90 %ile: 1.213253672s Sep 29 11:44:53.014: INFO: 99 %ile: 2.237621451s Sep 29 11:44:53.014: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:44:53.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "svc-latency-6428" for this suite.
• [SLOW TEST:17.722 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":221,"skipped":3506,"failed":0}
SS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 11:44:53.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp
+noall +answer +search _http._tcp.dns-test-service.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7765.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7765.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 173.89.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.89.173_udp@PTR;check="$$(dig +tcp +noall +answer +search 173.89.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.89.173_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7765.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7765.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7765.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7765.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7765.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7765.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 173.89.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.89.173_udp@PTR;check="$$(dig +tcp +noall +answer +search 173.89.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.89.173_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:44:59.435: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.440: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.446: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.452: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.523: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.542: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.548: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod 
dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.560: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:44:59.752: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:04.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.776: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.782: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod 
dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.814: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.817: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.820: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.822: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:04.841: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:09.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod 
dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:09.780: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:09.795: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:09.798: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:10.039: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:10.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:10.087: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:10.132: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not 
find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:10.350: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:14.786: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.805: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.827: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.926: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods 
dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:14.944: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:15.008: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:15.160: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:19.771: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:19.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods 
dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:19.794: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:19.853: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:20.006: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:20.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:20.012: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:20.015: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:20.075: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:24.757: INFO: Unable to read wheezy_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.761: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.764: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.767: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.788: INFO: Unable to read jessie_udp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.791: INFO: Unable to read jessie_tcp@dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.794: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.796: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local from pod dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f: the server could not find the requested resource (get pods dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f) Sep 29 11:45:24.814: INFO: Lookups using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f failed for: [wheezy_udp@dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@dns-test-service.dns-7765.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_udp@dns-test-service.dns-7765.svc.cluster.local jessie_tcp@dns-test-service.dns-7765.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7765.svc.cluster.local] Sep 29 11:45:29.817: INFO: DNS probes using dns-7765/dns-test-5452b2f6-47d1-493c-9b7c-2fa8198d813f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:45:30.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7765" for this suite. 
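The probe scripts in this test derive each pod's A record name from its IP (`hostname -i`) with an `awk` pipeline. A minimal, standalone reproduction of that naming step, using a stand-in pod IP (in the real test the IP comes from the probe pod itself):

```shell
# Build the pod A record name the way the wheezy/jessie probe scripts do:
# dots in the pod IP become dashes, suffixed with <namespace>.pod.cluster.local.
pod_ip="10.244.1.5"   # stand-in value; the probe uses $(hostname -i)
pod_a_rec=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7765.pod.cluster.local"}')
echo "$pod_a_rec"     # prints 10-244-1-5.dns-7765.pod.cluster.local
```

The probe then resolves this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an `OK` marker file for each transport that succeeds.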
• [SLOW TEST:37.628 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":222,"skipped":3508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:45:30.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Sep 29 11:45:30.772: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:30.803: INFO: Number of nodes with available pods: 0 Sep 29 11:45:30.803: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:31.817: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:31.820: INFO: Number of nodes with available pods: 0 Sep 29 11:45:31.820: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:32.812: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:33.137: INFO: Number of nodes with available pods: 0 Sep 29 11:45:33.137: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:33.828: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:33.853: INFO: Number of nodes with available pods: 0 Sep 29 11:45:33.853: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:34.842: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:34.850: INFO: Number of nodes with available pods: 1 Sep 29 11:45:34.850: INFO: Node kali-worker2 is running more than one daemon pod Sep 29 11:45:35.825: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:35.832: INFO: Number of nodes with available pods: 2 Sep 29 11:45:35.832: INFO: Number of running 
nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Sep 29 11:45:35.907: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:35.930: INFO: Number of nodes with available pods: 1 Sep 29 11:45:35.930: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:36.954: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:36.958: INFO: Number of nodes with available pods: 1 Sep 29 11:45:36.958: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:37.935: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:37.939: INFO: Number of nodes with available pods: 1 Sep 29 11:45:37.939: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:38.936: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:38.940: INFO: Number of nodes with available pods: 1 Sep 29 11:45:38.940: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:45:39.954: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Sep 29 11:45:39.957: INFO: Number of nodes with available pods: 2 Sep 29 11:45:39.957: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2221, will wait for the garbage collector to delete the pods Sep 29 11:45:40.021: INFO: Deleting DaemonSet.extensions daemon-set took: 6.674576ms Sep 29 11:45:40.422: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.24743ms Sep 29 11:45:43.540: INFO: Number of nodes with available pods: 0 Sep 29 11:45:43.540: INFO: Number of running nodes: 0, number of available pods: 0 Sep 29 11:45:43.545: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2221/daemonsets","resourceVersion":"1617132"},"items":null} Sep 29 11:45:43.548: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2221/pods","resourceVersion":"1617132"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:45:43.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2221" for this suite. 
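The poll loop above keeps comparing the number of nodes with an available daemon pod against the number of schedulable nodes until they match ("Number of running nodes: 2, number of available pods: 2"). A rough sketch of that readiness condition, with stand-in values where the test reads the live DaemonSet status (e.g. via `kubectl get ds daemon-set -o jsonpath='{.status.desiredNumberScheduled} {.status.numberAvailable}'`):

```shell
# Stand-in status values; in the test these come from the DaemonSet's
# status.desiredNumberScheduled and status.numberAvailable fields.
desired=2
available=2
if [ "$available" -eq "$desired" ]; then
  echo "daemon set ready: $available/$desired"
else
  echo "waiting: $available/$desired"
fi
```

When a daemon pod is forced to phase `Failed`, `available` drops below `desired` and the loop reports "running more than one daemon pod" style mismatches until the controller revives the pod.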
• [SLOW TEST:12.908 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":223,"skipped":3534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:45:43.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2148.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2148.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2148.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2148.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Sep 29 11:45:49.853: INFO: DNS probes using dns-2148/dns-test-a642ba01-bf93-436c-b5ed-4480e613f569 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:45:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2148" for this suite. 
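The earlier service-DNS probes also check reverse (PTR) lookups, querying names like `173.89.105.10.in-addr.arpa.` for the service IP `10.105.89.173`. The octet reversal those probes rely on can be sketched with the same `awk` idiom the scripts use elsewhere (the IP is taken from this log; in the test it is the ClusterIP under test):

```shell
# Reverse the IP's octets and append in-addr.arpa. to form the PTR query name.
service_ip="10.105.89.173"
ptr_name=$(echo "$service_ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr_name"   # prints 173.89.105.10.in-addr.arpa.
```

The probe issues `dig +notcp`/`dig +tcp` against this name and records `10.105.89.173_udp@PTR` / `10.105.89.173_tcp@PTR` markers on success.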
• [SLOW TEST:6.532 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":224,"skipped":3559,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:45:50.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Sep 29 11:45:50.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1111' Sep 29 11:45:50.778: INFO: stderr: "" Sep 29 11:45:50.778: INFO: stdout: "pod/pause created\n" Sep 29 11:45:50.778: INFO: Waiting up to 5m0s for 1 pods to be 
running and ready: [pause] Sep 29 11:45:50.778: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1111" to be "running and ready" Sep 29 11:45:50.821: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 42.565938ms Sep 29 11:45:52.984: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205629257s Sep 29 11:45:55.001: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.223322826s Sep 29 11:45:55.002: INFO: Pod "pause" satisfied condition "running and ready" Sep 29 11:45:55.002: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Sep 29 11:45:55.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1111' Sep 29 11:45:55.144: INFO: stderr: "" Sep 29 11:45:55.144: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Sep 29 11:45:55.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1111' Sep 29 11:45:55.232: INFO: stderr: "" Sep 29 11:45:55.232: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Sep 29 11:45:55.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1111' Sep 29 11:45:55.456: INFO: stderr: "" Sep 29 11:45:55.456: INFO: stdout: "pod/pause labeled\n" STEP: 
verifying the pod doesn't have the label testing-label Sep 29 11:45:55.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1111' Sep 29 11:45:55.583: INFO: stderr: "" Sep 29 11:45:55.583: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Sep 29 11:45:55.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1111' Sep 29 11:45:55.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:45:55.747: INFO: stdout: "pod \"pause\" force deleted\n" Sep 29 11:45:55.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1111' Sep 29 11:45:55.865: INFO: stderr: "No resources found in kubectl-1111 namespace.\n" Sep 29 11:45:55.865: INFO: stdout: "" Sep 29 11:45:55.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1111 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 29 11:45:56.068: INFO: stderr: "" Sep 29 11:45:56.068: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:45:56.068: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "kubectl-1111" for this suite. • [SLOW TEST:5.990 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":225,"skipped":3568,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:45:56.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Sep 29 11:45:56.325: INFO: Waiting up to 5m0s for pod "pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d" in namespace "emptydir-9939" 
to be "Succeeded or Failed" Sep 29 11:45:56.335: INFO: Pod "pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.858511ms Sep 29 11:45:58.397: INFO: Pod "pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072280264s Sep 29 11:46:00.409: INFO: Pod "pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084557521s STEP: Saw pod success Sep 29 11:46:00.410: INFO: Pod "pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d" satisfied condition "Succeeded or Failed" Sep 29 11:46:00.412: INFO: Trying to get logs from node kali-worker pod pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d container test-container: STEP: delete the pod Sep 29 11:46:00.468: INFO: Waiting for pod pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d to disappear Sep 29 11:46:00.503: INFO: Pod pod-3e28c0d5-0c00-4d8e-a5e9-e2335c8c575d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:00.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9939" for this suite. 
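The emptydir pass condition above (a pod writing a file into a volume mounted with mode 0666 on the default medium, then verifying the permission bits) can be approximated locally. This is a minimal sketch, using a temp directory as a stand-in for the emptydir mount; the helper name and paths are illustrative, not the test's actual code.

```python
import os
import stat
import tempfile

def create_with_mode(dirpath, name, mode=0o666):
    """Create a file and force its permission bits, mirroring what the
    e2e test expects for an emptydir mounted with defaultMode 0666."""
    path = os.path.join(dirpath, name)
    with open(path, "w") as f:
        f.write("mount-tester content")
    # chmod explicitly: the process umask may have masked bits at creation.
    os.chmod(path, mode)
    return path

# Stand-in for the pod's emptydir mount point (illustrative only).
with tempfile.TemporaryDirectory() as vol:
    p = create_with_mode(vol, "test-file")
    perms = stat.S_IMODE(os.stat(p).st_mode)
    print(oct(perms))  # 0o666
```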
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3582,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:00.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Sep 29 11:46:00.702: INFO: starting watch STEP: patching STEP: updating Sep 29 11:46:00.724: INFO: waiting for watch events with expected annotations Sep 29 11:46:00.725: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:00.766: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "ingressclass-4943" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":227,"skipped":3603,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:00.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 29 11:46:00.865: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:08.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2432" for this suite. 
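The init-container test that follows relies on a specific ordering guarantee: init containers run one at a time, in spec order, and each must complete successfully before the next starts; regular containers start only after all init containers have finished. A toy model of that rule, with hypothetical container names:

```python
def run_pod(init_containers, containers):
    """Toy model of init-container semantics: init containers run
    sequentially and must all succeed before app containers start."""
    started = []
    for name, succeeds in init_containers:
        started.append(name)
        if not succeeds:
            # Pod stays in an Init error state; app containers never start.
            return started, False
    started.extend(name for name, _ in containers)
    return started, True

order, ok = run_pod([("init1", True), ("init2", True)], [("run1", True)])
print(order, ok)  # ['init1', 'init2', 'run1'] True
```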
• [SLOW TEST:7.756 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":228,"skipped":3616,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:08.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 29 11:46:08.621: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 29 11:46:08.627: INFO: Waiting for terminating namespaces to be deleted... 
Sep 29 11:46:08.629: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 29 11:46:08.633: INFO: pod-init-250b008f-03b1-45e6-ac28-8c75743a24bd from init-container-2432 started at 2020-09-29 11:46:01 +0000 UTC (1 container statuses recorded) Sep 29 11:46:08.633: INFO: Container run1 ready: true, restart count 0 Sep 29 11:46:08.633: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:46:08.633: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:46:08.633: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:46:08.633: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:46:08.633: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 29 11:46:08.638: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:46:08.638: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:46:08.638: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:46:08.638: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0b8ccd25-c9d9-47ee-aec2-99f31f854592 42 STEP: Trying to relaunch the pod, now with labels. 
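The relaunch step above succeeds because nodeSelector matching is a simple superset check: the pod schedules onto a node only if every key/value pair in its nodeSelector appears, with the same value, among the node's labels. A minimal sketch (the label key `example.com/e2e` is a hypothetical stand-in for the random e2e label in the log):

```python
def node_matches(node_labels, node_selector):
    # nodeSelector semantics: every selector pair must be an exact match
    # in the node's labels (i.e. the labels are a superset of the selector).
    return all(node_labels.get(k) == v for k, v in node_selector.items())

labeled = {"kubernetes.io/hostname": "kali-worker", "example.com/e2e": "42"}
plain = {"kubernetes.io/hostname": "kali-worker2"}
selector = {"example.com/e2e": "42"}
print(node_matches(labeled, selector), node_matches(plain, selector))  # True False
```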
STEP: removing the label kubernetes.io/e2e-0b8ccd25-c9d9-47ee-aec2-99f31f854592 off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0b8ccd25-c9d9-47ee-aec2-99f31f854592 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:16.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2586" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.289 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":229,"skipped":3627,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:16.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Sep 29 11:46:20.946: INFO: Pod pod-hostip-6fcfbd82-8045-468f-a3ec-709a43ac458d has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:20.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9035" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:20.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Sep 29 11:46:21.002: INFO: namespace kubectl-1217 Sep 29 11:46:21.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1217' Sep 29 11:46:21.305: INFO: stderr: "" Sep 29 11:46:21.305: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Sep 29 11:46:22.535: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:46:22.535: INFO: Found 0 / 1 Sep 29 11:46:23.368: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:46:23.368: INFO: Found 0 / 1 Sep 29 11:46:24.409: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:46:24.409: INFO: Found 0 / 1 Sep 29 11:46:25.310: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:46:25.310: INFO: Found 1 / 1 Sep 29 11:46:25.310: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Sep 29 11:46:25.313: INFO: Selector matched 1 pods for map[app:agnhost] Sep 29 11:46:25.313: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
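The "Found 0 / 1 ... Found 1 / 1" lines above are the framework polling until the RC's pods are running, with an overall deadline (5m0s here). The pattern is a generic poll-until-true loop; this sketch is an assumption about the shape of that loop, not the framework's actual code:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.01):
    """Poll `condition` until it returns True or the deadline passes,
    mirroring the found-0/1 ... found-1/1 polling in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulate a pod that becomes ready on the third poll.
polls = iter([False, False, True])
print(wait_for(lambda: next(polls)))  # True
```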
Sep 29 11:46:25.313: INFO: wait on agnhost-primary startup in kubectl-1217 Sep 29 11:46:25.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs agnhost-primary-zmmv6 agnhost-primary --namespace=kubectl-1217' Sep 29 11:46:30.101: INFO: stderr: "" Sep 29 11:46:30.101: INFO: stdout: "Paused\n" STEP: exposing RC Sep 29 11:46:30.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1217' Sep 29 11:46:30.247: INFO: stderr: "" Sep 29 11:46:30.247: INFO: stdout: "service/rm2 exposed\n" Sep 29 11:46:30.278: INFO: Service rm2 in namespace kubectl-1217 found. STEP: exposing service Sep 29 11:46:32.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1217' Sep 29 11:46:32.463: INFO: stderr: "" Sep 29 11:46:32.463: INFO: stdout: "service/rm3 exposed\n" Sep 29 11:46:32.469: INFO: Service rm3 in namespace kubectl-1217 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:46:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1217" for this suite. 
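Note how both `expose` commands above keep targetPort 6379: a Service maps its own port to the backing pods' container port, so exposing a Service again (rm3 from rm2) still resolves to the same container port even though the service ports differ. A minimal model of that mapping, with an illustrative helper name:

```python
def expose(port, target_port):
    """Sketch of `kubectl expose`: the new Service listens on `port`
    and forwards to the pods' `target_port`."""
    return {"port": port, "targetPort": target_port}

rm2 = expose(1234, 6379)               # expose rc agnhost-primary --port=1234 --target-port=6379
rm3 = expose(2345, rm2["targetPort"])  # expose service rm2 --port=2345 --target-port=6379
print(rm3)  # {'port': 2345, 'targetPort': 6379}
```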
• [SLOW TEST:13.531 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":231,"skipped":3688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:46:34.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-0b4cec6a-b4fa-4ca0-9172-43dc08389815 STEP: Creating secret with name s-test-opt-upd-d7014349-5fa8-40ed-bfdb-cf0de431433a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0b4cec6a-b4fa-4ca0-9172-43dc08389815 STEP: Updating secret 
s-test-opt-upd-d7014349-5fa8-40ed-bfdb-cf0de431433a STEP: Creating secret with name s-test-opt-create-da6181b1-eba7-49b5-ac20-f5903fd4205d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:48:11.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6053" for this suite. • [SLOW TEST:96.664 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3722,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:48:11.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Sep 29 11:48:11.232: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Sep 29 11:48:11.247: INFO: Waiting for terminating namespaces to be deleted... Sep 29 11:48:11.250: INFO: Logging pods the apiserver thinks is on node kali-worker before test Sep 29 11:48:11.256: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:48:11.256: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:48:11.256: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:48:11.256: INFO: Container kube-proxy ready: true, restart count 0 Sep 29 11:48:11.256: INFO: pod-secrets-9423239b-f9eb-49a0-8659-4873cec37aa0 from secrets-6053 started at 2020-09-29 11:46:34 +0000 UTC (3 container statuses recorded) Sep 29 11:48:11.256: INFO: Container creates-volume-test ready: true, restart count 0 Sep 29 11:48:11.256: INFO: Container dels-volume-test ready: true, restart count 0 Sep 29 11:48:11.256: INFO: Container upds-volume-test ready: true, restart count 0 Sep 29 11:48:11.256: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Sep 29 11:48:11.261: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:48:11.261: INFO: Container kindnet-cni ready: true, restart count 0 Sep 29 11:48:11.261: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Sep 29 11:48:11.261: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a5f8245f-1d1c-4848-8029-856e85d77acb 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a5f8245f-1d1c-4848-8029-856e85d77acb off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a5f8245f-1d1c-4848-8029-856e85d77acb [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:48:27.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6564" for this suite. 
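All three pods above land on the same node because hostPort conflicts require *all* of port, protocol, and (overlapping) hostIP to collide: pod2 differs from pod1 in hostIP, and pod3 differs from pod2 in protocol. A sketch of that predicate, assuming only the rule the test exercises (the wildcard 0.0.0.0 overlaps every IP):

```python
def host_ports_conflict(a, b):
    """Scheduler predicate sketch: two (hostIP, hostPort, protocol)
    bindings conflict only if ports match, protocols match, and the
    IPs overlap (equal, or either side binds the wildcard 0.0.0.0)."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("127.0.0.2", 54321, "TCP")  # same port, different hostIP
pod3 = ("127.0.0.2", 54321, "UDP")  # same hostIP/port as pod2, different protocol
print(host_ports_conflict(pod1, pod2), host_ports_conflict(pod2, pod3))  # False False
```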
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.354 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":233,"skipped":3737,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:48:27.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:48:27.572: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Sep 29 11:48:27.577: INFO: Number of nodes with available pods: 0 Sep 29 11:48:27.577: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Sep 29 11:48:27.654: INFO: Number of nodes with available pods: 0 Sep 29 11:48:27.654: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:28.659: INFO: Number of nodes with available pods: 0 Sep 29 11:48:28.659: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:29.658: INFO: Number of nodes with available pods: 0 Sep 29 11:48:29.659: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:30.661: INFO: Number of nodes with available pods: 0 Sep 29 11:48:30.661: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:31.659: INFO: Number of nodes with available pods: 1 Sep 29 11:48:31.659: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Sep 29 11:48:31.708: INFO: Number of nodes with available pods: 1 Sep 29 11:48:31.708: INFO: Number of running nodes: 0, number of available pods: 1 Sep 29 11:48:32.886: INFO: Number of nodes with available pods: 0 Sep 29 11:48:32.886: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Sep 29 11:48:33.135: INFO: Number of nodes with available pods: 0 Sep 29 11:48:33.135: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:34.273: INFO: Number of nodes with available pods: 0 Sep 29 11:48:34.274: INFO: Node kali-worker is running more than one daemon 
pod Sep 29 11:48:35.138: INFO: Number of nodes with available pods: 0 Sep 29 11:48:35.138: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:36.138: INFO: Number of nodes with available pods: 0 Sep 29 11:48:36.138: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:37.140: INFO: Number of nodes with available pods: 0 Sep 29 11:48:37.140: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:38.139: INFO: Number of nodes with available pods: 0 Sep 29 11:48:38.140: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:39.140: INFO: Number of nodes with available pods: 0 Sep 29 11:48:39.140: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:40.138: INFO: Number of nodes with available pods: 0 Sep 29 11:48:40.138: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:41.141: INFO: Number of nodes with available pods: 0 Sep 29 11:48:41.141: INFO: Node kali-worker is running more than one daemon pod Sep 29 11:48:42.140: INFO: Number of nodes with available pods: 1 Sep 29 11:48:42.140: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-961, will wait for the garbage collector to delete the pods Sep 29 11:48:42.204: INFO: Deleting DaemonSet.extensions daemon-set took: 5.892029ms Sep 29 11:48:42.604: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.262469ms Sep 29 11:48:48.709: INFO: Number of nodes with available pods: 0 Sep 29 11:48:48.709: INFO: Number of running nodes: 0, number of available pods: 0 Sep 29 11:48:48.711: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-961/daemonsets","resourceVersion":"1618146"},"items":null} Sep 29 11:48:48.733: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-961/pods","resourceVersion":"1618147"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:48:48.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-961" for this suite. • [SLOW TEST:21.263 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":234,"skipped":3741,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:48:48.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should 
ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:48:48.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9800" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":235,"skipped":3755,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:48:48.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2826 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2826 STEP: creating replication controller externalsvc in namespace services-2826 I0929 11:48:49.155550 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2826, replica count: 2 I0929 11:48:52.205975 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:48:55.206254 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Sep 29 11:48:55.265: INFO: Creating new exec pod Sep 29 11:48:59.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2826 execpodjzcz9 -- /bin/sh -x -c nslookup nodeport-service.services-2826.svc.cluster.local' Sep 29 11:48:59.712: INFO: stderr: "I0929 11:48:59.589731 2697 log.go:181] (0xc000868dc0) (0xc00056eb40) Create stream\nI0929 11:48:59.589807 2697 log.go:181] (0xc000868dc0) (0xc00056eb40) Stream added, broadcasting: 1\nI0929 11:48:59.598428 2697 log.go:181] (0xc000868dc0) Reply frame received for 1\nI0929 11:48:59.598482 2697 log.go:181] (0xc000868dc0) (0xc000bc4000) Create stream\nI0929 11:48:59.598498 2697 log.go:181] (0xc000868dc0) (0xc000bc4000) Stream added, broadcasting: 3\nI0929 11:48:59.601196 2697 log.go:181] (0xc000868dc0) Reply frame received for 3\nI0929 11:48:59.601231 2697 log.go:181] (0xc000868dc0) (0xc000996820) Create stream\nI0929 11:48:59.601247 
2697 log.go:181] (0xc000868dc0) (0xc000996820) Stream added, broadcasting: 5\nI0929 11:48:59.602018 2697 log.go:181] (0xc000868dc0) Reply frame received for 5\nI0929 11:48:59.691496 2697 log.go:181] (0xc000868dc0) Data frame received for 5\nI0929 11:48:59.691529 2697 log.go:181] (0xc000996820) (5) Data frame handling\nI0929 11:48:59.691549 2697 log.go:181] (0xc000996820) (5) Data frame sent\n+ nslookup nodeport-service.services-2826.svc.cluster.local\nI0929 11:48:59.702002 2697 log.go:181] (0xc000868dc0) Data frame received for 3\nI0929 11:48:59.702022 2697 log.go:181] (0xc000bc4000) (3) Data frame handling\nI0929 11:48:59.702033 2697 log.go:181] (0xc000bc4000) (3) Data frame sent\nI0929 11:48:59.703340 2697 log.go:181] (0xc000868dc0) Data frame received for 3\nI0929 11:48:59.703371 2697 log.go:181] (0xc000bc4000) (3) Data frame handling\nI0929 11:48:59.703392 2697 log.go:181] (0xc000bc4000) (3) Data frame sent\nI0929 11:48:59.703624 2697 log.go:181] (0xc000868dc0) Data frame received for 3\nI0929 11:48:59.703635 2697 log.go:181] (0xc000bc4000) (3) Data frame handling\nI0929 11:48:59.703963 2697 log.go:181] (0xc000868dc0) Data frame received for 5\nI0929 11:48:59.703982 2697 log.go:181] (0xc000996820) (5) Data frame handling\nI0929 11:48:59.706099 2697 log.go:181] (0xc000868dc0) Data frame received for 1\nI0929 11:48:59.706124 2697 log.go:181] (0xc00056eb40) (1) Data frame handling\nI0929 11:48:59.706150 2697 log.go:181] (0xc00056eb40) (1) Data frame sent\nI0929 11:48:59.706180 2697 log.go:181] (0xc000868dc0) (0xc00056eb40) Stream removed, broadcasting: 1\nI0929 11:48:59.706242 2697 log.go:181] (0xc000868dc0) Go away received\nI0929 11:48:59.706589 2697 log.go:181] (0xc000868dc0) (0xc00056eb40) Stream removed, broadcasting: 1\nI0929 11:48:59.706609 2697 log.go:181] (0xc000868dc0) (0xc000bc4000) Stream removed, broadcasting: 3\nI0929 11:48:59.706625 2697 log.go:181] (0xc000868dc0) (0xc000996820) Stream removed, broadcasting: 5\n" Sep 29 11:48:59.712: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2826.svc.cluster.local\tcanonical name = externalsvc.services-2826.svc.cluster.local.\nName:\texternalsvc.services-2826.svc.cluster.local\nAddress: 10.97.117.117\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2826, will wait for the garbage collector to delete the pods Sep 29 11:48:59.773: INFO: Deleting ReplicationController externalsvc took: 6.732364ms Sep 29 11:48:59.873: INFO: Terminating ReplicationController externalsvc pods took: 100.263494ms Sep 29 11:49:08.734: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:49:08.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2826" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:19.884 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":236,"skipped":3759,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:49:08.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Sep 29 11:49:08.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7284' Sep 29 11:49:09.100: INFO: stderr: "" Sep 29 11:49:09.100: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 29 11:49:09.100: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:09.227: INFO: stderr: "" Sep 29 11:49:09.227: INFO: stdout: "update-demo-nautilus-4txn4 update-demo-nautilus-fvrfz " Sep 29 11:49:09.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4txn4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:09.323: INFO: stderr: "" Sep 29 11:49:09.323: INFO: stdout: "" Sep 29 11:49:09.324: INFO: update-demo-nautilus-4txn4 is created but not running Sep 29 11:49:14.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:14.436: INFO: stderr: "" Sep 29 11:49:14.436: INFO: stdout: "update-demo-nautilus-4txn4 update-demo-nautilus-fvrfz " Sep 29 11:49:14.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4txn4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:14.554: INFO: stderr: "" Sep 29 11:49:14.554: INFO: stdout: "" Sep 29 11:49:14.554: INFO: update-demo-nautilus-4txn4 is created but not running Sep 29 11:49:19.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:19.667: INFO: stderr: "" Sep 29 11:49:19.667: INFO: stdout: "update-demo-nautilus-4txn4 update-demo-nautilus-fvrfz " Sep 29 11:49:19.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4txn4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:19.756: INFO: stderr: "" Sep 29 11:49:19.756: INFO: stdout: "true" Sep 29 11:49:19.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4txn4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:19.850: INFO: stderr: "" Sep 29 11:49:19.850: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:49:19.850: INFO: validating pod update-demo-nautilus-4txn4 Sep 29 11:49:19.854: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:49:19.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 29 11:49:19.854: INFO: update-demo-nautilus-4txn4 is verified up and running Sep 29 11:49:19.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:19.958: INFO: stderr: "" Sep 29 11:49:19.958: INFO: stdout: "true" Sep 29 11:49:19.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:20.065: INFO: stderr: "" Sep 29 11:49:20.065: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:49:20.066: INFO: validating pod update-demo-nautilus-fvrfz Sep 29 11:49:20.070: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:49:20.070: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 29 11:49:20.070: INFO: update-demo-nautilus-fvrfz is verified up and running STEP: scaling down the replication controller Sep 29 11:49:20.073: INFO: scanned /root for discovery docs: Sep 29 11:49:20.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7284' Sep 29 11:49:21.208: INFO: stderr: "" Sep 29 11:49:21.208: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Sep 29 11:49:21.208: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:21.314: INFO: stderr: "" Sep 29 11:49:21.314: INFO: stdout: "update-demo-nautilus-4txn4 update-demo-nautilus-fvrfz " STEP: Replicas for name=update-demo: expected=1 actual=2 Sep 29 11:49:26.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:26.420: INFO: stderr: "" Sep 29 11:49:26.420: INFO: stdout: "update-demo-nautilus-fvrfz " Sep 29 11:49:26.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:26.518: INFO: stderr: "" Sep 29 11:49:26.519: INFO: stdout: "true" Sep 29 11:49:26.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:26.619: INFO: stderr: "" Sep 29 11:49:26.619: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:49:26.619: INFO: validating pod update-demo-nautilus-fvrfz Sep 29 11:49:26.623: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:49:26.623: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 29 11:49:26.623: INFO: update-demo-nautilus-fvrfz is verified up and running STEP: scaling up the replication controller Sep 29 11:49:26.627: INFO: scanned /root for discovery docs: Sep 29 11:49:26.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7284' Sep 29 11:49:27.755: INFO: stderr: "" Sep 29 11:49:27.755: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Sep 29 11:49:27.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:27.862: INFO: stderr: "" Sep 29 11:49:27.862: INFO: stdout: "update-demo-nautilus-cb99w update-demo-nautilus-fvrfz " Sep 29 11:49:27.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cb99w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:27.963: INFO: stderr: "" Sep 29 11:49:27.963: INFO: stdout: "" Sep 29 11:49:27.963: INFO: update-demo-nautilus-cb99w is created but not running Sep 29 11:49:32.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7284' Sep 29 11:49:33.073: INFO: stderr: "" Sep 29 11:49:33.073: INFO: stdout: "update-demo-nautilus-cb99w update-demo-nautilus-fvrfz " Sep 29 11:49:33.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cb99w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:33.177: INFO: stderr: "" Sep 29 11:49:33.177: INFO: stdout: "true" Sep 29 11:49:33.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cb99w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:33.277: INFO: stderr: "" Sep 29 11:49:33.277: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:49:33.277: INFO: validating pod update-demo-nautilus-cb99w Sep 29 11:49:33.280: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:49:33.280: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Sep 29 11:49:33.280: INFO: update-demo-nautilus-cb99w is verified up and running Sep 29 11:49:33.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:33.386: INFO: stderr: "" Sep 29 11:49:33.387: INFO: stdout: "true" Sep 29 11:49:33.387: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvrfz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7284' Sep 29 11:49:33.507: INFO: stderr: "" Sep 29 11:49:33.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Sep 29 11:49:33.507: INFO: validating pod update-demo-nautilus-fvrfz Sep 29 11:49:33.510: INFO: got data: { "image": "nautilus.jpg" } Sep 29 11:49:33.511: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Sep 29 11:49:33.511: INFO: update-demo-nautilus-fvrfz is verified up and running STEP: using delete to clean up resources Sep 29 11:49:33.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7284' Sep 29 11:49:33.621: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Sep 29 11:49:33.621: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Sep 29 11:49:33.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7284' Sep 29 11:49:33.736: INFO: stderr: "No resources found in kubectl-7284 namespace.\n" Sep 29 11:49:33.736: INFO: stdout: "" Sep 29 11:49:33.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7284 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 29 11:49:33.846: INFO: stderr: "" Sep 29 11:49:33.846: INFO: stdout: "update-demo-nautilus-cb99w\nupdate-demo-nautilus-fvrfz\n" Sep 29 11:49:34.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7284' Sep 29 11:49:34.473: INFO: stderr: "No resources found in kubectl-7284 namespace.\n" Sep 29 11:49:34.473: INFO: stdout: "" Sep 29 11:49:34.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7284 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Sep 29 11:49:34.598: INFO: stderr: "" Sep 29 11:49:34.598: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:49:34.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7284" for this suite. 
• [SLOW TEST:25.818 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":237,"skipped":3763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:49:34.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:49:34.808: INFO: Creating ReplicaSet my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047 Sep 29 11:49:34.878: INFO: Pod name my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047: Found 0 pods out of 1 Sep 29 11:49:39.883: INFO: Pod name 
my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047: Found 1 pods out of 1 Sep 29 11:49:39.883: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047" is running Sep 29 11:49:39.893: INFO: Pod "my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047-m9pmm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 11:49:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 11:49:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 11:49:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-09-29 11:49:34 +0000 UTC Reason: Message:}]) Sep 29 11:49:39.894: INFO: Trying to dial the pod Sep 29 11:49:44.906: INFO: Controller my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047: Got expected result from replica 1 [my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047-m9pmm]: "my-hostname-basic-96ae893c-21cf-418b-a587-4c6a2d420047-m9pmm", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:49:44.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-666" for this suite. 
• [SLOW TEST:10.309 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":238,"skipped":3790,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:49:44.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Sep 29 11:49:53.076: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 29 11:49:53.083: INFO: Pod pod-with-poststart-http-hook still exists Sep 29 11:49:55.083: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 29 11:49:55.088: INFO: Pod pod-with-poststart-http-hook still exists Sep 29 11:49:57.083: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 29 11:49:57.088: INFO: Pod pod-with-poststart-http-hook still exists Sep 29 11:49:59.083: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Sep 29 11:49:59.087: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:49:59.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5571" for this suite. 
• [SLOW TEST:14.182 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":239,"skipped":3805,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:49:59.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive 
resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:03.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6184" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":240,"skipped":3819,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:03.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Sep 29 11:50:03.654: INFO: Waiting up to 5m0s for pod "pod-1b58e3a7-618f-4b77-a073-a0f230955148" in namespace "emptydir-4662" to be "Succeeded or Failed" Sep 29 11:50:03.660: INFO: Pod "pod-1b58e3a7-618f-4b77-a073-a0f230955148": Phase="Pending", Reason="", readiness=false. Elapsed: 5.627222ms Sep 29 11:50:05.808: INFO: Pod "pod-1b58e3a7-618f-4b77-a073-a0f230955148": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.153804767s Sep 29 11:50:07.813: INFO: Pod "pod-1b58e3a7-618f-4b77-a073-a0f230955148": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158304631s STEP: Saw pod success Sep 29 11:50:07.813: INFO: Pod "pod-1b58e3a7-618f-4b77-a073-a0f230955148" satisfied condition "Succeeded or Failed" Sep 29 11:50:07.816: INFO: Trying to get logs from node kali-worker2 pod pod-1b58e3a7-618f-4b77-a073-a0f230955148 container test-container: STEP: delete the pod Sep 29 11:50:08.099: INFO: Waiting for pod pod-1b58e3a7-618f-4b77-a073-a0f230955148 to disappear Sep 29 11:50:08.102: INFO: Pod pod-1b58e3a7-618f-4b77-a073-a0f230955148 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:08.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4662" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":3835,"failed":0} SSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:08.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:08.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1160" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":242,"skipped":3841,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:08.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:50:08.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4" in namespace "projected-288" to be "Succeeded or Failed" Sep 29 11:50:08.515: INFO: Pod "downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.524613ms Sep 29 11:50:10.638: INFO: Pod "downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.138414036s Sep 29 11:50:12.643: INFO: Pod "downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142890061s STEP: Saw pod success Sep 29 11:50:12.643: INFO: Pod "downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4" satisfied condition "Succeeded or Failed" Sep 29 11:50:12.646: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4 container client-container: STEP: delete the pod Sep 29 11:50:12.689: INFO: Waiting for pod downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4 to disappear Sep 29 11:50:12.726: INFO: Pod downwardapi-volume-a92fbb2b-ed47-49e0-ad86-0fe5e100b6d4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:12.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-288" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3843,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:12.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Sep 29 11:50:20.980: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 29 11:50:20.987: INFO: Pod pod-with-prestop-exec-hook still exists Sep 29 11:50:22.988: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 29 11:50:22.993: INFO: Pod pod-with-prestop-exec-hook still exists Sep 29 11:50:24.988: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 29 11:50:24.992: INFO: Pod pod-with-prestop-exec-hook still exists Sep 29 11:50:26.988: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 29 11:50:26.993: INFO: Pod pod-with-prestop-exec-hook still exists Sep 29 11:50:28.988: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Sep 29 11:50:28.993: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:28.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8986" for this suite. 
• [SLOW TEST:16.272 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":3865,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:29.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the 
webhook pod STEP: Wait for the deployment to be ready Sep 29 11:50:29.737: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 11:50:31.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 11:50:33.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977029, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:50:36.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:50:36.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1539-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:50:37.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2918" for this suite. STEP: Destroying namespace "webhook-2918-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.025 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":245,"skipped":3872,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:50:38.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6950 STEP: creating service affinity-clusterip in namespace services-6950 STEP: creating replication controller affinity-clusterip in namespace services-6950 I0929 11:50:38.150498 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6950, replica count: 3 I0929 11:50:41.200984 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0929 11:50:44.201271 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 11:50:44.207: INFO: Creating new exec pod Sep 29 11:50:49.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6950 execpod-affinity9ptvf -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Sep 29 11:50:49.445: INFO: stderr: "I0929 11:50:49.359395 3220 log.go:181] (0xc00029def0) (0xc000d2c8c0) Create stream\nI0929 11:50:49.359438 3220 log.go:181] (0xc00029def0) (0xc000d2c8c0) Stream added, broadcasting: 1\nI0929 11:50:49.367895 3220 log.go:181] (0xc00029def0) Reply frame received for 1\nI0929 11:50:49.367962 3220 log.go:181] (0xc00029def0) (0xc000d2c000) Create stream\nI0929 11:50:49.367986 3220 log.go:181] (0xc00029def0) (0xc000d2c000) Stream added, broadcasting: 3\nI0929 11:50:49.368984 3220 log.go:181] (0xc00029def0) Reply frame received for 3\nI0929 11:50:49.369012 3220 log.go:181] (0xc00029def0) (0xc000d2c0a0) Create stream\nI0929 11:50:49.369023 3220 log.go:181] (0xc00029def0) (0xc000d2c0a0) Stream added, broadcasting: 5\nI0929 11:50:49.369731 3220 log.go:181] (0xc00029def0) Reply frame received for 5\nI0929 11:50:49.439276 3220 
log.go:181] (0xc00029def0) Data frame received for 5\nI0929 11:50:49.439376 3220 log.go:181] (0xc000d2c0a0) (5) Data frame handling\nI0929 11:50:49.439406 3220 log.go:181] (0xc000d2c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0929 11:50:49.439676 3220 log.go:181] (0xc00029def0) Data frame received for 5\nI0929 11:50:49.439686 3220 log.go:181] (0xc000d2c0a0) (5) Data frame handling\nI0929 11:50:49.439692 3220 log.go:181] (0xc000d2c0a0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0929 11:50:49.439954 3220 log.go:181] (0xc00029def0) Data frame received for 5\nI0929 11:50:49.439963 3220 log.go:181] (0xc000d2c0a0) (5) Data frame handling\nI0929 11:50:49.440218 3220 log.go:181] (0xc00029def0) Data frame received for 3\nI0929 11:50:49.440228 3220 log.go:181] (0xc000d2c000) (3) Data frame handling\nI0929 11:50:49.441640 3220 log.go:181] (0xc00029def0) Data frame received for 1\nI0929 11:50:49.441661 3220 log.go:181] (0xc000d2c8c0) (1) Data frame handling\nI0929 11:50:49.441674 3220 log.go:181] (0xc000d2c8c0) (1) Data frame sent\nI0929 11:50:49.441688 3220 log.go:181] (0xc00029def0) (0xc000d2c8c0) Stream removed, broadcasting: 1\nI0929 11:50:49.441844 3220 log.go:181] (0xc00029def0) Go away received\nI0929 11:50:49.441967 3220 log.go:181] (0xc00029def0) (0xc000d2c8c0) Stream removed, broadcasting: 1\nI0929 11:50:49.441990 3220 log.go:181] (0xc00029def0) (0xc000d2c000) Stream removed, broadcasting: 3\nI0929 11:50:49.442002 3220 log.go:181] (0xc00029def0) (0xc000d2c0a0) Stream removed, broadcasting: 5\n" Sep 29 11:50:49.446: INFO: stdout: "" Sep 29 11:50:49.447: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6950 execpod-affinity9ptvf -- /bin/sh -x -c nc -zv -t -w 2 10.108.59.129 80' Sep 29 11:50:49.666: INFO: stderr: "I0929 11:50:49.581599 3238 log.go:181] (0xc000669b80) (0xc000d0a640) Create stream\nI0929 11:50:49.581654 3238 
log.go:181] (0xc000669b80) (0xc000d0a640) Stream added, broadcasting: 1\nI0929 11:50:49.587095 3238 log.go:181] (0xc000669b80) Reply frame received for 1\nI0929 11:50:49.587135 3238 log.go:181] (0xc000669b80) (0xc000d0a000) Create stream\nI0929 11:50:49.587146 3238 log.go:181] (0xc000669b80) (0xc000d0a000) Stream added, broadcasting: 3\nI0929 11:50:49.588124 3238 log.go:181] (0xc000669b80) Reply frame received for 3\nI0929 11:50:49.588162 3238 log.go:181] (0xc000669b80) (0xc0000cc1e0) Create stream\nI0929 11:50:49.588173 3238 log.go:181] (0xc000669b80) (0xc0000cc1e0) Stream added, broadcasting: 5\nI0929 11:50:49.589365 3238 log.go:181] (0xc000669b80) Reply frame received for 5\nI0929 11:50:49.658574 3238 log.go:181] (0xc000669b80) Data frame received for 3\nI0929 11:50:49.658626 3238 log.go:181] (0xc000669b80) Data frame received for 5\nI0929 11:50:49.658668 3238 log.go:181] (0xc0000cc1e0) (5) Data frame handling\nI0929 11:50:49.658691 3238 log.go:181] (0xc0000cc1e0) (5) Data frame sent\nI0929 11:50:49.658715 3238 log.go:181] (0xc000669b80) Data frame received for 5\nI0929 11:50:49.658735 3238 log.go:181] (0xc0000cc1e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.59.129 80\nConnection to 10.108.59.129 80 port [tcp/http] succeeded!\nI0929 11:50:49.658778 3238 log.go:181] (0xc000d0a000) (3) Data frame handling\nI0929 11:50:49.660239 3238 log.go:181] (0xc000669b80) Data frame received for 1\nI0929 11:50:49.660260 3238 log.go:181] (0xc000d0a640) (1) Data frame handling\nI0929 11:50:49.660284 3238 log.go:181] (0xc000d0a640) (1) Data frame sent\nI0929 11:50:49.660316 3238 log.go:181] (0xc000669b80) (0xc000d0a640) Stream removed, broadcasting: 1\nI0929 11:50:49.660344 3238 log.go:181] (0xc000669b80) Go away received\nI0929 11:50:49.660998 3238 log.go:181] (0xc000669b80) (0xc000d0a640) Stream removed, broadcasting: 1\nI0929 11:50:49.661024 3238 log.go:181] (0xc000669b80) (0xc000d0a000) Stream removed, broadcasting: 3\nI0929 11:50:49.661044 3238 log.go:181] 
(0xc000669b80) (0xc0000cc1e0) Stream removed, broadcasting: 5\n" Sep 29 11:50:49.666: INFO: stdout: "" Sep 29 11:50:49.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-6950 execpod-affinity9ptvf -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.59.129:80/ ; done' Sep 29 11:50:49.979: INFO: stderr: "I0929 11:50:49.812532 3256 log.go:181] (0xc000e05550) (0xc0002faaa0) Create stream\nI0929 11:50:49.812585 3256 log.go:181] (0xc000e05550) (0xc0002faaa0) Stream added, broadcasting: 1\nI0929 11:50:49.815655 3256 log.go:181] (0xc000e05550) Reply frame received for 1\nI0929 11:50:49.815696 3256 log.go:181] (0xc000e05550) (0xc000ca00a0) Create stream\nI0929 11:50:49.815723 3256 log.go:181] (0xc000e05550) (0xc000ca00a0) Stream added, broadcasting: 3\nI0929 11:50:49.816659 3256 log.go:181] (0xc000e05550) Reply frame received for 3\nI0929 11:50:49.816681 3256 log.go:181] (0xc000e05550) (0xc0002fab40) Create stream\nI0929 11:50:49.816688 3256 log.go:181] (0xc000e05550) (0xc0002fab40) Stream added, broadcasting: 5\nI0929 11:50:49.817819 3256 log.go:181] (0xc000e05550) Reply frame received for 5\nI0929 11:50:49.882641 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.882667 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.882674 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.882684 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.882689 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.882694 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.887202 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.887223 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.887247 3256 log.go:181] (0xc000ca00a0) (3) 
Data frame sent\nI0929 11:50:49.887849 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.887877 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.887889 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.887902 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.887912 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.887920 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.895490 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.895514 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.895538 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.895822 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.895847 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.895872 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.895891 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.895905 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.895911 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.901436 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.901461 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.901473 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.901938 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.901958 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.901966 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.901975 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.901979 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.901985 3256 log.go:181] 
(0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.908106 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.908134 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.908159 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.908695 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.908719 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.908726 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.908744 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.908771 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.908781 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\nI0929 11:50:49.908789 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.908795 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.908814 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\nI0929 11:50:49.912910 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.912936 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.912945 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.914103 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.914127 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.914143 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\nI0929 11:50:49.914933 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.914956 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.914971 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.915120 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.915134 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 
11:50:49.915140 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.919152 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.919170 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.919199 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.919640 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.919664 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.919688 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.919700 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.919709 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.919716 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.922774 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.922795 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.922808 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.923206 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.923218 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.923227 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0929 11:50:49.923257 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.923277 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.923282 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n 2 http://10.108.59.129:80/\nI0929 11:50:49.923288 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.923304 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.923308 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.927087 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 
11:50:49.927101 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.927110 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.927494 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.927511 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.927517 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.927528 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.927533 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.927539 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.931896 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.931909 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.931917 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.932169 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.932180 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.932187 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.932198 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.932203 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.932209 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.935683 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.935700 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.935711 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.936142 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.936170 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.936182 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 
11:50:49.936198 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.936216 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.936226 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.940298 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.940318 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.940339 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.941121 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.941140 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.941152 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.941174 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.941185 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.941203 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.945305 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.945321 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.945334 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.945701 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.945715 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.945726 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.945740 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.945747 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.945753 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.951384 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.951402 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.951418 3256 log.go:181] (0xc000ca00a0) (3) Data 
frame sent\nI0929 11:50:49.952126 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.952146 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.952156 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.952173 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.952181 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.952192 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.958947 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.958972 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.958988 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.959563 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.959576 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.959592 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.959619 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.959632 3256 log.go:181] (0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.959649 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.966225 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.966251 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.966272 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.966684 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.966712 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.966724 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.966739 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.966748 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.966756 3256 log.go:181] 
(0xc0002fab40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.59.129:80/\nI0929 11:50:49.971777 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.971801 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.971821 3256 log.go:181] (0xc000ca00a0) (3) Data frame sent\nI0929 11:50:49.972364 3256 log.go:181] (0xc000e05550) Data frame received for 3\nI0929 11:50:49.972404 3256 log.go:181] (0xc000ca00a0) (3) Data frame handling\nI0929 11:50:49.972532 3256 log.go:181] (0xc000e05550) Data frame received for 5\nI0929 11:50:49.972553 3256 log.go:181] (0xc0002fab40) (5) Data frame handling\nI0929 11:50:49.973961 3256 log.go:181] (0xc000e05550) Data frame received for 1\nI0929 11:50:49.973977 3256 log.go:181] (0xc0002faaa0) (1) Data frame handling\nI0929 11:50:49.973996 3256 log.go:181] (0xc0002faaa0) (1) Data frame sent\nI0929 11:50:49.974039 3256 log.go:181] (0xc000e05550) (0xc0002faaa0) Stream removed, broadcasting: 1\nI0929 11:50:49.974178 3256 log.go:181] (0xc000e05550) Go away received\nI0929 11:50:49.974483 3256 log.go:181] (0xc000e05550) (0xc0002faaa0) Stream removed, broadcasting: 1\nI0929 11:50:49.974508 3256 log.go:181] (0xc000e05550) (0xc000ca00a0) Stream removed, broadcasting: 3\nI0929 11:50:49.974525 3256 log.go:181] (0xc000e05550) (0xc0002fab40) Stream removed, broadcasting: 5\n" Sep 29 11:50:49.979: INFO: stdout: "\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt\naffinity-clusterip-kkswt" Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 
11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Received response from host: affinity-clusterip-kkswt Sep 29 11:50:49.979: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-6950, will wait for the garbage collector to delete the pods Sep 29 11:50:50.186: INFO: Deleting ReplicationController affinity-clusterip took: 99.174936ms Sep 29 11:50:50.486: INFO: Terminating ReplicationController affinity-clusterip pods took: 300.165661ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:51:08.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6950" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.293 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":246,"skipped":3891,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:51:08.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to 
have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0929 11:51:20.339531 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 29 11:52:22.371: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Sep 29 11:52:22.371: INFO: Deleting pod "simpletest-rc-to-be-deleted-66fb5" in namespace "gc-9873" Sep 29 11:52:22.455: INFO: Deleting pod "simpletest-rc-to-be-deleted-6j7hx" in namespace "gc-9873" Sep 29 11:52:22.531: INFO: Deleting pod "simpletest-rc-to-be-deleted-7hg9l" in namespace "gc-9873" Sep 29 11:52:22.861: INFO: Deleting pod "simpletest-rc-to-be-deleted-fj5p6" in namespace "gc-9873" Sep 29 11:52:23.030: INFO: Deleting pod "simpletest-rc-to-be-deleted-lq2tk" in namespace "gc-9873" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:23.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9873" for this suite. 
• [SLOW TEST:75.246 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":247,"skipped":3893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:23.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the 
container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Sep 29 11:52:28.179: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:28.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5973" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":3959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:28.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Sep 29 11:52:32.903: INFO: Successfully updated pod "labelsupdate1ca46fae-a9fd-4c30-8a94-3fc445d1b2b6" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:36.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6073" for this suite. • [SLOW TEST:8.709 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4006,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:36.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:52:37.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa" in namespace "downward-api-4266" to be "Succeeded or Failed" Sep 29 11:52:37.054: INFO: Pod "downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.857489ms Sep 29 11:52:39.062: INFO: Pod "downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012534095s Sep 29 11:52:41.066: INFO: Pod "downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015999974s STEP: Saw pod success Sep 29 11:52:41.066: INFO: Pod "downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa" satisfied condition "Succeeded or Failed" Sep 29 11:52:41.069: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa container client-container: STEP: delete the pod Sep 29 11:52:41.081: INFO: Waiting for pod downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa to disappear Sep 29 11:52:41.086: INFO: Pod downwardapi-volume-f48eb4b3-78ed-423f-b7c6-a8924f9059fa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:41.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4266" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":4019,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:41.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 11:52:42.058: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 11:52:44.070: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977162, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977162, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977162, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977162, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:52:47.125: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:52:47.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6242-crds.webhook.example.com via the AdmissionRegistration API 
STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:48.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4403" for this suite. STEP: Destroying namespace "webhook-4403-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.210 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":251,"skipped":4019,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:48.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-72f6ab9c-ed45-4072-accb-0606cd8fed36 STEP: Creating a pod to test consume configMaps Sep 29 11:52:48.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442" in namespace "configmap-4405" to be "Succeeded or Failed" Sep 29 11:52:48.434: INFO: Pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442": Phase="Pending", Reason="", readiness=false. Elapsed: 12.732035ms Sep 29 11:52:50.438: INFO: Pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016816462s Sep 29 11:52:52.442: INFO: Pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442": Phase="Running", Reason="", readiness=true. Elapsed: 4.020124499s Sep 29 11:52:54.445: INFO: Pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023738287s STEP: Saw pod success Sep 29 11:52:54.445: INFO: Pod "pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442" satisfied condition "Succeeded or Failed" Sep 29 11:52:54.449: INFO: Trying to get logs from node kali-worker pod pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442 container configmap-volume-test: STEP: delete the pod Sep 29 11:52:54.534: INFO: Waiting for pod pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442 to disappear Sep 29 11:52:54.542: INFO: Pod pod-configmaps-18104aa8-7673-4981-bc00-fb5557b0a442 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:52:54.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4405" for this suite. • [SLOW TEST:6.220 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":252,"skipped":4032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:52:54.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Sep 29 11:52:54.608: INFO: PodSpec: initContainers in spec.initContainers Sep 29 11:53:48.643: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-213c8888-8299-46a4-9d26-6c72fb98206b", GenerateName:"", Namespace:"init-container-302", SelfLink:"/api/v1/namespaces/init-container-302/pods/pod-init-213c8888-8299-46a4-9d26-6c72fb98206b", UID:"4ab8f4f2-14a2-4f43-a8d3-498f49bdd233", ResourceVersion:"1620117", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63736977174, loc:(*time.Location)(0x7701840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"608668627"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0071b8040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0071b8060)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", 
APIVersion:"v1", Time:(*v1.Time)(0xc0071b8080), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0071b80a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5nwzk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0054fc000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nwzk", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nwzk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", 
Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5nwzk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a08098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001696000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a081a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a081c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a081c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a081cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc004f70190), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, 
Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977174, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977174, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977174, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977174, loc:(*time.Location)(0x7701840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.168", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.168"}}, StartTime:(*v1.Time)(0xc0071b80c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016964d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001696540)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://bc8f7d0da182342b7269e49bb6664a5db7f7682880fc6eed4f74ce2af147e406", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0071b8100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0071b80e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a0824f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:53:48.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-302" for this suite. 
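The pod dump above shows the mechanism under test: with `RestartPolicy: Always`, the failing `init1` keeps restarting (RestartCount:3), `init2` stays Waiting, `run1` never starts, and the pod reports `Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"`. A minimal Go sketch of how such a condition message is assembled — simplified illustrative types, not the real `v1.ContainerStatus` or kubelet code:

```go
package main

import "fmt"

// containerStatus is a simplified stand-in for v1.ContainerStatus;
// only the fields this sketch needs (names are illustrative).
type containerStatus struct {
	Name        string
	Initialized bool // for init containers: terminated with exit code 0
}

// initMessage summarizes init progress the way the Initialized condition
// message in the dump above does, e.g.
// "containers with incomplete status: [init1 init2]".
func initMessage(statuses []containerStatus) string {
	var incomplete []string
	for _, s := range statuses {
		if !s.Initialized {
			incomplete = append(incomplete, s.Name)
		}
	}
	if len(incomplete) == 0 {
		return ""
	}
	return fmt.Sprintf("containers with incomplete status: %v", incomplete)
}

func main() {
	statuses := []containerStatus{{Name: "init1"}, {Name: "init2"}}
	fmt.Println(initMessage(statuses))
	// containers with incomplete status: [init1 init2]
}
```

Because init containers gate all app containers, `run1` stays in Waiting state for as long as `init1` keeps failing, which is exactly what this conformance test asserts.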
• [SLOW TEST:54.128 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":253,"skipped":4061,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:53:48.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-5796 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-5796 STEP: Deleting pre-stop pod Sep 29 11:54:01.898: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:01.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5796" for this suite. • [SLOW TEST:13.281 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":254,"skipped":4079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:01.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:02.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7503" for this suite. 
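The Endpoint lifecycle steps above (create, list, update, patch, delete by collection, verify deletion) can be sketched with a toy in-memory store; the real test issues the same call sequence through the Kubernetes client, and these method names are illustrative, not the client-go API:

```go
package main

import "fmt"

// store is a toy in-memory stand-in for the Endpoints API,
// mapping endpoint name to an opaque payload.
type store struct{ items map[string]string }

func newStore() *store { return &store{items: map[string]string{}} }

func (s *store) Create(name, payload string) { s.items[name] = payload }
func (s *store) Update(name, payload string) { s.items[name] = payload }
func (s *store) Patch(name, payload string)  { s.items[name] = payload }
func (s *store) List() int                   { return len(s.items) }
func (s *store) DeleteCollection()           { s.items = map[string]string{} }

func main() {
	s := newStore()
	s.Create("testservice", "v1") // STEP: creating an Endpoint
	fmt.Println(s.List())         // STEP: listing all Endpoints
	s.Update("testservice", "v2") // STEP: updating the Endpoint
	s.Patch("testservice", "v3")  // STEP: patching the Endpoint
	s.DeleteCollection()          // STEP: deleting the Endpoint by Collection
	fmt.Println(s.List())         // STEP: waiting for Endpoint deletion
}
```

The point of the conformance test is that each verb in this sequence round-trips correctly and that a delete-collection call removes the object the earlier steps created.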
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":255,"skipped":4117,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:02.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 11:54:03.258: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 11:54:05.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977243, 
loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977243, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977243, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977243, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 11:54:08.325: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:08.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4324" for this suite. STEP: Destroying namespace "webhook-4324-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.036 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":256,"skipped":4134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:08.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
secret-emptykey-test-e403b2d8-b603-4fca-ae49-1dd661512160 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:08.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7256" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":257,"skipped":4198,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:08.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Sep 29 11:54:08.660: INFO: Waiting up to 5m0s for pod "pod-7903955c-e579-417c-a0a6-e1cd5ad2355a" in namespace "emptydir-1178" to be "Succeeded or Failed" Sep 29 11:54:08.667: INFO: Pod "pod-7903955c-e579-417c-a0a6-e1cd5ad2355a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.577713ms Sep 29 11:54:10.671: INFO: Pod "pod-7903955c-e579-417c-a0a6-e1cd5ad2355a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011713003s Sep 29 11:54:12.675: INFO: Pod "pod-7903955c-e579-417c-a0a6-e1cd5ad2355a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015674845s STEP: Saw pod success Sep 29 11:54:12.675: INFO: Pod "pod-7903955c-e579-417c-a0a6-e1cd5ad2355a" satisfied condition "Succeeded or Failed" Sep 29 11:54:12.679: INFO: Trying to get logs from node kali-worker pod pod-7903955c-e579-417c-a0a6-e1cd5ad2355a container test-container: STEP: delete the pod Sep 29 11:54:12.711: INFO: Waiting for pod pod-7903955c-e579-417c-a0a6-e1cd5ad2355a to disappear Sep 29 11:54:12.715: INFO: Pod pod-7903955c-e579-417c-a0a6-e1cd5ad2355a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:12.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1178" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4214,"failed":0} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:12.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2833 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 29 11:54:12.767: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 29 11:54:12.872: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:54:14.876: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:54:16.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:18.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:20.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:22.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:24.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:26.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:28.876: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:30.877: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 11:54:32.876: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 29 11:54:32.881: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 29 11:54:36.959: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.171 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2833 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:54:36.959: INFO: >>> kubeConfig: /root/.kube/config I0929 11:54:36.997773 7 log.go:181] (0xc003ab09a0) (0xc001a7a3c0) Create stream I0929 11:54:36.997831 7 log.go:181] (0xc003ab09a0) (0xc001a7a3c0) Stream 
added, broadcasting: 1 I0929 11:54:37.000285 7 log.go:181] (0xc003ab09a0) Reply frame received for 1 I0929 11:54:37.000338 7 log.go:181] (0xc003ab09a0) (0xc000150e60) Create stream I0929 11:54:37.000353 7 log.go:181] (0xc003ab09a0) (0xc000150e60) Stream added, broadcasting: 3 I0929 11:54:37.001289 7 log.go:181] (0xc003ab09a0) Reply frame received for 3 I0929 11:54:37.001328 7 log.go:181] (0xc003ab09a0) (0xc003c7ad20) Create stream I0929 11:54:37.001339 7 log.go:181] (0xc003ab09a0) (0xc003c7ad20) Stream added, broadcasting: 5 I0929 11:54:37.002294 7 log.go:181] (0xc003ab09a0) Reply frame received for 5 I0929 11:54:38.076409 7 log.go:181] (0xc003ab09a0) Data frame received for 3 I0929 11:54:38.076469 7 log.go:181] (0xc003ab09a0) Data frame received for 5 I0929 11:54:38.076507 7 log.go:181] (0xc003c7ad20) (5) Data frame handling I0929 11:54:38.076602 7 log.go:181] (0xc000150e60) (3) Data frame handling I0929 11:54:38.076692 7 log.go:181] (0xc000150e60) (3) Data frame sent I0929 11:54:38.076727 7 log.go:181] (0xc003ab09a0) Data frame received for 3 I0929 11:54:38.076745 7 log.go:181] (0xc000150e60) (3) Data frame handling I0929 11:54:38.079387 7 log.go:181] (0xc003ab09a0) Data frame received for 1 I0929 11:54:38.079478 7 log.go:181] (0xc001a7a3c0) (1) Data frame handling I0929 11:54:38.079531 7 log.go:181] (0xc001a7a3c0) (1) Data frame sent I0929 11:54:38.079562 7 log.go:181] (0xc003ab09a0) (0xc001a7a3c0) Stream removed, broadcasting: 1 I0929 11:54:38.079609 7 log.go:181] (0xc003ab09a0) Go away received I0929 11:54:38.079714 7 log.go:181] (0xc003ab09a0) (0xc001a7a3c0) Stream removed, broadcasting: 1 I0929 11:54:38.079750 7 log.go:181] (0xc003ab09a0) (0xc000150e60) Stream removed, broadcasting: 3 I0929 11:54:38.079781 7 log.go:181] (0xc003ab09a0) (0xc003c7ad20) Stream removed, broadcasting: 5 Sep 29 11:54:38.079: INFO: Found all expected endpoints: [netserver-0] Sep 29 11:54:38.083: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.170 8081 
| grep -v '^\s*$'] Namespace:pod-network-test-2833 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:54:38.083: INFO: >>> kubeConfig: /root/.kube/config I0929 11:54:38.148086 7 log.go:181] (0xc003ab1080) (0xc001a7a960) Create stream I0929 11:54:38.148118 7 log.go:181] (0xc003ab1080) (0xc001a7a960) Stream added, broadcasting: 1 I0929 11:54:38.149851 7 log.go:181] (0xc003ab1080) Reply frame received for 1 I0929 11:54:38.149878 7 log.go:181] (0xc003ab1080) (0xc00226dae0) Create stream I0929 11:54:38.149890 7 log.go:181] (0xc003ab1080) (0xc00226dae0) Stream added, broadcasting: 3 I0929 11:54:38.150933 7 log.go:181] (0xc003ab1080) Reply frame received for 3 I0929 11:54:38.150987 7 log.go:181] (0xc003ab1080) (0xc0032c4500) Create stream I0929 11:54:38.151002 7 log.go:181] (0xc003ab1080) (0xc0032c4500) Stream added, broadcasting: 5 I0929 11:54:38.151799 7 log.go:181] (0xc003ab1080) Reply frame received for 5 I0929 11:54:39.230170 7 log.go:181] (0xc003ab1080) Data frame received for 3 I0929 11:54:39.230206 7 log.go:181] (0xc00226dae0) (3) Data frame handling I0929 11:54:39.230240 7 log.go:181] (0xc00226dae0) (3) Data frame sent I0929 11:54:39.230419 7 log.go:181] (0xc003ab1080) Data frame received for 5 I0929 11:54:39.230444 7 log.go:181] (0xc0032c4500) (5) Data frame handling I0929 11:54:39.230500 7 log.go:181] (0xc003ab1080) Data frame received for 3 I0929 11:54:39.230534 7 log.go:181] (0xc00226dae0) (3) Data frame handling I0929 11:54:39.232430 7 log.go:181] (0xc003ab1080) Data frame received for 1 I0929 11:54:39.232530 7 log.go:181] (0xc001a7a960) (1) Data frame handling I0929 11:54:39.232603 7 log.go:181] (0xc001a7a960) (1) Data frame sent I0929 11:54:39.232647 7 log.go:181] (0xc003ab1080) (0xc001a7a960) Stream removed, broadcasting: 1 I0929 11:54:39.232694 7 log.go:181] (0xc003ab1080) Go away received I0929 11:54:39.232946 7 log.go:181] (0xc003ab1080) (0xc001a7a960) Stream removed, 
broadcasting: 1 I0929 11:54:39.232981 7 log.go:181] (0xc003ab1080) (0xc00226dae0) Stream removed, broadcasting: 3 I0929 11:54:39.232994 7 log.go:181] (0xc003ab1080) (0xc0032c4500) Stream removed, broadcasting: 5 Sep 29 11:54:39.233: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:39.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2833" for this suite. • [SLOW TEST:26.523 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4222,"failed":0} SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:39.246: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Sep 29 11:54:39.366: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:54:58.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9139" for this suite. 
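The "setting up watch … verifying pod creation was observed … verifying pod deletion was observed" steps above can be sketched with a toy event channel; the real test consumes `watch.Event`s from the API server, and this `event` type is an illustrative stand-in:

```go
package main

import "fmt"

// event is a toy stand-in for watch.Event: an event type plus a pod name.
type event struct {
	Type string // "ADDED", "MODIFIED", "DELETED"
	Pod  string
}

// observeLifecycle drains events until it has seen both the creation and the
// deletion of the named pod, mirroring the test's verification steps.
func observeLifecycle(events <-chan event, name string) (sawAdd, sawDelete bool) {
	for ev := range events {
		if ev.Pod != name {
			continue
		}
		switch ev.Type {
		case "ADDED":
			sawAdd = true
		case "DELETED":
			sawDelete = true
			return
		}
	}
	return
}

func main() {
	ch := make(chan event, 3)
	ch <- event{"ADDED", "pod-x"}
	ch <- event{"MODIFIED", "pod-x"}
	ch <- event{"DELETED", "pod-x"}
	close(ch)
	add, del := observeLifecycle(ch, "pod-x")
	fmt.Println(add, del) // true true
}
```

Setting up the watch before submitting the pod is what makes the assertion sound: no creation or deletion event can slip past between the submit and the observe.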
• [SLOW TEST:18.873 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4225,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:54:58.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:54:58.193: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Sep 29 11:55:01.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2729 create -f -' Sep 29 11:55:04.662: INFO: 
stderr: "" Sep 29 11:55:04.663: INFO: stdout: "e2e-test-crd-publish-openapi-6522-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 29 11:55:04.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2729 delete e2e-test-crd-publish-openapi-6522-crds test-cr' Sep 29 11:55:04.783: INFO: stderr: "" Sep 29 11:55:04.783: INFO: stdout: "e2e-test-crd-publish-openapi-6522-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Sep 29 11:55:04.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2729 apply -f -' Sep 29 11:55:05.085: INFO: stderr: "" Sep 29 11:55:05.085: INFO: stdout: "e2e-test-crd-publish-openapi-6522-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Sep 29 11:55:05.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2729 delete e2e-test-crd-publish-openapi-6522-crds test-cr' Sep 29 11:55:05.192: INFO: stderr: "" Sep 29 11:55:05.192: INFO: stdout: "e2e-test-crd-publish-openapi-6522-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Sep 29 11:55:05.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6522-crds' Sep 29 11:55:05.481: INFO: stderr: "" Sep 29 11:55:05.481: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6522-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:08.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2729" for this suite. 
• [SLOW TEST:10.298 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":261,"skipped":4228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:08.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Sep 29 11:55:08.493: INFO: Waiting up to 5m0s for pod "pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9" in namespace "emptydir-9791" to be "Succeeded or Failed" Sep 29 11:55:08.539: INFO: Pod "pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9": 
Phase="Pending", Reason="", readiness=false. Elapsed: 45.66026ms Sep 29 11:55:10.543: INFO: Pod "pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049585168s Sep 29 11:55:12.547: INFO: Pod "pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053847141s STEP: Saw pod success Sep 29 11:55:12.547: INFO: Pod "pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9" satisfied condition "Succeeded or Failed" Sep 29 11:55:12.550: INFO: Trying to get logs from node kali-worker pod pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9 container test-container: STEP: delete the pod Sep 29 11:55:12.568: INFO: Waiting for pod pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9 to disappear Sep 29 11:55:12.573: INFO: Pod pod-b746eb1c-7e29-4b22-ace6-a5f3b31a40f9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:12.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9791" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4259,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:12.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 11:55:12.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868" in namespace "projected-4670" to be "Succeeded or Failed" Sep 29 11:55:12.688: INFO: Pod "downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022688ms Sep 29 11:55:14.692: INFO: Pod "downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006111849s Sep 29 11:55:16.695: INFO: Pod "downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009661264s STEP: Saw pod success Sep 29 11:55:16.695: INFO: Pod "downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868" satisfied condition "Succeeded or Failed" Sep 29 11:55:16.698: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868 container client-container: STEP: delete the pod Sep 29 11:55:16.731: INFO: Waiting for pod downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868 to disappear Sep 29 11:55:16.747: INFO: Pod downwardapi-volume-a29515b9-d23a-4107-8b30-c44ea6fe8868 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:16.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4670" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4277,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:16.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4b2f5c8f-f7c4-4e29-8dc4-8df635e853e2 STEP: Creating a pod to test consume secrets Sep 29 11:55:16.875: INFO: Waiting up to 5m0s for pod "pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8" in namespace "secrets-6016" to be "Succeeded or Failed" Sep 29 11:55:16.892: INFO: Pod "pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.922842ms Sep 29 11:55:18.896: INFO: Pod "pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021470537s Sep 29 11:55:20.901: INFO: Pod "pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025833164s STEP: Saw pod success Sep 29 11:55:20.901: INFO: Pod "pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8" satisfied condition "Succeeded or Failed" Sep 29 11:55:20.904: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8 container secret-volume-test: STEP: delete the pod Sep 29 11:55:20.954: INFO: Waiting for pod pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8 to disappear Sep 29 11:55:20.964: INFO: Pod pod-secrets-7edf7074-08af-416e-9bb8-7cdf109004a8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:20.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6016" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":264,"skipped":4312,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:20.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Sep 29 11:55:21.063: INFO: Waiting up to 5m0s for pod "pod-618e88ab-0286-475a-91d9-1028609a7fdf" in namespace "emptydir-3573" to be "Succeeded or Failed" Sep 29 11:55:21.079: INFO: Pod "pod-618e88ab-0286-475a-91d9-1028609a7fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.604737ms Sep 29 11:55:23.083: INFO: Pod "pod-618e88ab-0286-475a-91d9-1028609a7fdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02033704s Sep 29 11:55:25.088: INFO: Pod "pod-618e88ab-0286-475a-91d9-1028609a7fdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024760669s STEP: Saw pod success Sep 29 11:55:25.088: INFO: Pod "pod-618e88ab-0286-475a-91d9-1028609a7fdf" satisfied condition "Succeeded or Failed" Sep 29 11:55:25.091: INFO: Trying to get logs from node kali-worker2 pod pod-618e88ab-0286-475a-91d9-1028609a7fdf container test-container: STEP: delete the pod Sep 29 11:55:25.109: INFO: Waiting for pod pod-618e88ab-0286-475a-91d9-1028609a7fdf to disappear Sep 29 11:55:25.126: INFO: Pod pod-618e88ab-0286-475a-91d9-1028609a7fdf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:25.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3573" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4325,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:25.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 11:55:25.216: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:55:27.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Pending, waiting for it to be Running (with Ready = true) Sep 29 11:55:29.221: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:31.221: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 
11:55:33.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:35.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:37.221: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:39.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:41.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:43.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:45.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:47.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = false) Sep 29 11:55:49.220: INFO: The status of Pod test-webserver-89ab23be-03e6-4fdd-ad01-ff2a6f0bf0d5 is Running (Ready = true) Sep 29 11:55:49.222: INFO: Container started at 2020-09-29 11:55:27 +0000 UTC, pod became ready at 2020-09-29 11:55:49 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:49.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9375" for this suite. 
• [SLOW TEST:24.074 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4337,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:49.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-4831/secret-test-2ca1297f-d919-4e96-9c7d-43811101aa93 STEP: Creating a pod to test consume secrets Sep 29 11:55:49.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd" in namespace "secrets-4831" to be "Succeeded or Failed" Sep 29 11:55:49.334: INFO: Pod "pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd": 
Phase="Pending", Reason="", readiness=false. Elapsed: 25.802702ms Sep 29 11:55:51.339: INFO: Pod "pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030696495s Sep 29 11:55:53.343: INFO: Pod "pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03487718s STEP: Saw pod success Sep 29 11:55:53.343: INFO: Pod "pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd" satisfied condition "Succeeded or Failed" Sep 29 11:55:53.345: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd container env-test: STEP: delete the pod Sep 29 11:55:53.361: INFO: Waiting for pod pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd to disappear Sep 29 11:55:53.366: INFO: Pod pod-configmaps-2375ba82-fce0-4fa1-a1b4-a62311421dfd no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:55:53.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4831" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4337,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:55:53.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Sep 29 11:55:53.477: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620896 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:55:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:55:53.477: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 
/api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620896 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:55:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Sep 29 11:56:03.485: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620949 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:56:03.486: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620949 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Sep 29 11:56:13.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620979 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[{e2e.test Update v1 2020-09-29 11:56:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:56:13.495: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1620979 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Sep 29 11:56:23.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1621007 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:56:23.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-a e514b9d9-1c79-46e1-839e-7f5c173fa0d0 1621007 0 2020-09-29 11:55:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
creating a configmap with label B and ensuring the correct watchers observe the notification Sep 29 11:56:33.514: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-b de8b6401-3d50-408f-a8d4-3cbee31a6aee 1621038 0 2020-09-29 11:56:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:56:33.514: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-b de8b6401-3d50-408f-a8d4-3cbee31a6aee 1621038 0 2020-09-29 11:56:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Sep 29 11:56:43.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-b de8b6401-3d50-408f-a8d4-3cbee31a6aee 1621068 0 2020-09-29 11:56:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-09-29 11:56:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Sep 29 11:56:43.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5404 /api/v1/namespaces/watch-5404/configmaps/e2e-watch-test-configmap-b de8b6401-3d50-408f-a8d4-3cbee31a6aee 1621068 0 2020-09-29 11:56:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[{e2e.test Update v1 2020-09-29 11:56:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:56:53.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5404" for this suite. • [SLOW TEST:60.160 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":268,"skipped":4349,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:56:53.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Sep 29 11:56:53.628: INFO: Waiting up to 5m0s for pod "downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e" in namespace "downward-api-6185" to be "Succeeded or Failed" Sep 29 11:56:53.642: INFO: Pod "downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.563829ms Sep 29 11:56:55.645: INFO: Pod "downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017481205s Sep 29 11:56:57.651: INFO: Pod "downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023233844s STEP: Saw pod success Sep 29 11:56:57.651: INFO: Pod "downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e" satisfied condition "Succeeded or Failed" Sep 29 11:56:57.654: INFO: Trying to get logs from node kali-worker2 pod downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e container dapi-container: STEP: delete the pod Sep 29 11:56:57.704: INFO: Waiting for pod downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e to disappear Sep 29 11:56:57.713: INFO: Pod downward-api-e1583312-4c5d-436b-bfe4-4c2534bb0e6e no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:56:57.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6185" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:56:57.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Sep 29 11:57:03.871: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3799 PodName:pod-sharedvolume-e6919822-5d19-41fa-8989-413e6fa9f59c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 11:57:03.871: INFO: >>> kubeConfig: /root/.kube/config I0929 11:57:03.922722 7 log.go:181] (0xc003ab0f20) (0xc005a539a0) Create stream I0929 11:57:03.922749 7 log.go:181] (0xc003ab0f20) (0xc005a539a0) Stream added, broadcasting: 1 I0929 11:57:03.926137 7 log.go:181] (0xc003ab0f20) Reply frame received for 1 I0929 11:57:03.926172 7 log.go:181] (0xc003ab0f20) (0xc0040600a0) Create stream I0929 11:57:03.926238 7 log.go:181] (0xc003ab0f20) 
(0xc0040600a0) Stream added, broadcasting: 3 I0929 11:57:03.927794 7 log.go:181] (0xc003ab0f20) Reply frame received for 3 I0929 11:57:03.927834 7 log.go:181] (0xc003ab0f20) (0xc005a53a40) Create stream I0929 11:57:03.927846 7 log.go:181] (0xc003ab0f20) (0xc005a53a40) Stream added, broadcasting: 5 I0929 11:57:03.928735 7 log.go:181] (0xc003ab0f20) Reply frame received for 5 I0929 11:57:03.989841 7 log.go:181] (0xc003ab0f20) Data frame received for 5 I0929 11:57:03.989898 7 log.go:181] (0xc005a53a40) (5) Data frame handling I0929 11:57:03.989948 7 log.go:181] (0xc003ab0f20) Data frame received for 3 I0929 11:57:03.989973 7 log.go:181] (0xc0040600a0) (3) Data frame handling I0929 11:57:03.989998 7 log.go:181] (0xc0040600a0) (3) Data frame sent I0929 11:57:03.990012 7 log.go:181] (0xc003ab0f20) Data frame received for 3 I0929 11:57:03.990024 7 log.go:181] (0xc0040600a0) (3) Data frame handling I0929 11:57:03.992225 7 log.go:181] (0xc003ab0f20) Data frame received for 1 I0929 11:57:03.992263 7 log.go:181] (0xc005a539a0) (1) Data frame handling I0929 11:57:03.992282 7 log.go:181] (0xc005a539a0) (1) Data frame sent I0929 11:57:03.992300 7 log.go:181] (0xc003ab0f20) (0xc005a539a0) Stream removed, broadcasting: 1 I0929 11:57:03.992371 7 log.go:181] (0xc003ab0f20) (0xc005a539a0) Stream removed, broadcasting: 1 I0929 11:57:03.992382 7 log.go:181] (0xc003ab0f20) (0xc0040600a0) Stream removed, broadcasting: 3 I0929 11:57:03.992388 7 log.go:181] (0xc003ab0f20) (0xc005a53a40) Stream removed, broadcasting: 5 Sep 29 11:57:03.992: INFO: Exec stderr: "" I0929 11:57:03.992419 7 log.go:181] (0xc003ab0f20) Go away received [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:57:03.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3799" for this suite. 
• [SLOW TEST:6.280 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":270,"skipped":4388,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:57:04.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 11:57:04.170: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4181" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":271,"skipped":4407,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 11:57:04.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-09f5f2f6-a6de-4200-ba70-e8259b6e69c5 in namespace container-probe-2822 Sep 29 11:57:08.334: INFO: Started pod busybox-09f5f2f6-a6de-4200-ba70-e8259b6e69c5 in namespace container-probe-2822 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 11:57:08.336: INFO: Initial restart count of pod busybox-09f5f2f6-a6de-4200-ba70-e8259b6e69c5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:09.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2822" for this suite. • [SLOW TEST:245.217 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4412,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:09.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting 
up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 12:01:10.351: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 12:01:12.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 12:01:14.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977670, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 12:01:17.404: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:17.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7232" for this suite. STEP: Destroying namespace "webhook-7232-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.624 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":273,"skipped":4429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:18.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 12:01:18.968: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Sep 29 12:01:20.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977678, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 12:01:22.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977679, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977678, 
loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 12:01:26.012: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:36.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7465" for this suite. STEP: Destroying namespace "webhook-7465-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.301 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":274,"skipped":4455,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:36.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection 
with secret that has name projected-secret-test-79e35160-e3f2-4b2d-b3ff-91219199724a STEP: Creating a pod to test consume secrets Sep 29 12:01:36.422: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0" in namespace "projected-3515" to be "Succeeded or Failed" Sep 29 12:01:36.438: INFO: Pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.132244ms Sep 29 12:01:38.442: INFO: Pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020235401s Sep 29 12:01:40.445: INFO: Pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0": Phase="Running", Reason="", readiness=true. Elapsed: 4.023518513s Sep 29 12:01:42.449: INFO: Pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027042561s STEP: Saw pod success Sep 29 12:01:42.449: INFO: Pod "pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0" satisfied condition "Succeeded or Failed" Sep 29 12:01:42.452: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0 container projected-secret-volume-test: STEP: delete the pod Sep 29 12:01:42.506: INFO: Waiting for pod pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0 to disappear Sep 29 12:01:42.521: INFO: Pod pod-projected-secrets-ec1916a8-42bd-4c6a-b37e-603fe8d6d6d0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:42.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3515" for this suite. 
• [SLOW TEST:6.179 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4463,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:42.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 12:01:42.599: INFO: Waiting up 
to 5m0s for pod "downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201" in namespace "downward-api-1871" to be "Succeeded or Failed" Sep 29 12:01:42.616: INFO: Pod "downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201": Phase="Pending", Reason="", readiness=false. Elapsed: 16.405617ms Sep 29 12:01:44.621: INFO: Pod "downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022081794s Sep 29 12:01:46.626: INFO: Pod "downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027279251s STEP: Saw pod success Sep 29 12:01:46.626: INFO: Pod "downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201" satisfied condition "Succeeded or Failed" Sep 29 12:01:46.629: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201 container client-container: STEP: delete the pod Sep 29 12:01:46.716: INFO: Waiting for pod downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201 to disappear Sep 29 12:01:46.731: INFO: Pod downwardapi-volume-ea42d685-589f-4077-9212-d8f81d08f201 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:46.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1871" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":276,"skipped":4465,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:46.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 12:01:46.789: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:01:50.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7604" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:01:50.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4640 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4640 STEP: creating replication controller externalsvc in namespace services-4640 I0929 12:01:51.057092 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4640, replica count: 2 I0929 12:01:54.107571 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0929 12:01:57.107846 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Sep 29 12:01:57.158: INFO: Creating new exec pod Sep 29 12:02:01.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4640 execpodlk2p4 -- /bin/sh -x -c nslookup clusterip-service.services-4640.svc.cluster.local' Sep 29 12:02:01.466: INFO: stderr: "I0929 12:02:01.351710 3365 log.go:181] (0xc00003a420) (0xc000ea8000) Create stream\nI0929 12:02:01.351770 3365 log.go:181] (0xc00003a420) (0xc000ea8000) Stream added, broadcasting: 1\nI0929 12:02:01.354736 3365 log.go:181] (0xc00003a420) Reply frame received for 1\nI0929 12:02:01.354797 3365 log.go:181] (0xc00003a420) (0xc0006988c0) Create stream\nI0929 12:02:01.354817 3365 log.go:181] (0xc00003a420) (0xc0006988c0) Stream added, broadcasting: 3\nI0929 12:02:01.357043 3365 log.go:181] (0xc00003a420) Reply frame received for 3\nI0929 12:02:01.357093 3365 log.go:181] (0xc00003a420) (0xc000699400) Create stream\nI0929 12:02:01.357118 3365 log.go:181] (0xc00003a420) (0xc000699400) Stream added, broadcasting: 5\nI0929 12:02:01.357890 3365 log.go:181] (0xc00003a420) Reply frame received for 5\nI0929 12:02:01.448941 3365 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 12:02:01.448977 3365 log.go:181] (0xc000699400) (5) Data frame handling\nI0929 12:02:01.448999 3365 log.go:181] (0xc000699400) (5) Data frame sent\n+ nslookup clusterip-service.services-4640.svc.cluster.local\nI0929 12:02:01.455754 3365 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 12:02:01.455785 3365 log.go:181] (0xc0006988c0) (3) Data frame handling\nI0929 12:02:01.455803 3365 log.go:181] (0xc0006988c0) (3) Data frame sent\nI0929 12:02:01.457223 3365 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 12:02:01.457256 3365 
log.go:181] (0xc0006988c0) (3) Data frame handling\nI0929 12:02:01.457289 3365 log.go:181] (0xc0006988c0) (3) Data frame sent\nI0929 12:02:01.457805 3365 log.go:181] (0xc00003a420) Data frame received for 5\nI0929 12:02:01.457830 3365 log.go:181] (0xc000699400) (5) Data frame handling\nI0929 12:02:01.457945 3365 log.go:181] (0xc00003a420) Data frame received for 3\nI0929 12:02:01.457975 3365 log.go:181] (0xc0006988c0) (3) Data frame handling\nI0929 12:02:01.460352 3365 log.go:181] (0xc00003a420) Data frame received for 1\nI0929 12:02:01.460377 3365 log.go:181] (0xc000ea8000) (1) Data frame handling\nI0929 12:02:01.460398 3365 log.go:181] (0xc000ea8000) (1) Data frame sent\nI0929 12:02:01.460431 3365 log.go:181] (0xc00003a420) (0xc000ea8000) Stream removed, broadcasting: 1\nI0929 12:02:01.460503 3365 log.go:181] (0xc00003a420) Go away received\nI0929 12:02:01.460957 3365 log.go:181] (0xc00003a420) (0xc000ea8000) Stream removed, broadcasting: 1\nI0929 12:02:01.460983 3365 log.go:181] (0xc00003a420) (0xc0006988c0) Stream removed, broadcasting: 3\nI0929 12:02:01.460999 3365 log.go:181] (0xc00003a420) (0xc000699400) Stream removed, broadcasting: 5\n" Sep 29 12:02:01.466: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4640.svc.cluster.local\tcanonical name = externalsvc.services-4640.svc.cluster.local.\nName:\texternalsvc.services-4640.svc.cluster.local\nAddress: 10.105.192.246\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4640, will wait for the garbage collector to delete the pods Sep 29 12:02:01.527: INFO: Deleting ReplicationController externalsvc took: 6.893744ms Sep 29 12:02:01.627: INFO: Terminating ReplicationController externalsvc pods took: 100.134061ms Sep 29 12:02:08.791: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:02:08.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4640" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:17.950 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":278,"skipped":4526,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:02:08.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 12:02:08.878: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f" in namespace "security-context-test-1837" to be "Succeeded or Failed" Sep 29 12:02:08.951: INFO: Pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 72.634331ms Sep 29 12:02:10.955: INFO: Pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077115839s Sep 29 12:02:13.028: INFO: Pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150478548s Sep 29 12:02:15.032: INFO: Pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154194665s Sep 29 12:02:15.032: INFO: Pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f" satisfied condition "Succeeded or Failed" Sep 29 12:02:15.038: INFO: Got logs for pod "busybox-privileged-false-bef92844-31c4-4961-91df-4da77ed0db2f": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:02:15.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1837" for this suite. 
• [SLOW TEST:6.224 seconds] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4528,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:02:15.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage 
collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0929 12:02:16.220539 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 29 12:03:18.240: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:03:18.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4455" for this suite. • [SLOW TEST:63.201 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":280,"skipped":4536,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:03:18.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Sep 29 12:03:25.421: INFO: 10 pods remaining Sep 29 12:03:25.421: INFO: 0 pods has nil DeletionTimestamp Sep 29 12:03:25.421: INFO: Sep 29 12:03:26.574: INFO: 0 pods remaining Sep 29 12:03:26.575: INFO: 0 pods has nil DeletionTimestamp Sep 29 12:03:26.575: INFO: Sep 29 12:03:27.215: INFO: 0 pods remaining Sep 29 12:03:27.215: INFO: 0 pods has nil DeletionTimestamp Sep 29 12:03:27.215: INFO: STEP: Gathering metrics W0929 12:03:28.236771 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Sep 29 12:04:30.573: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:04:30.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2598" for this suite. 
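[Editor's note] The rc in this test outlives its delete call because the request asks for foreground cascading deletion, so the owner is kept until every dependent pod is gone. The log does not show the exact options the test sends; a minimal sketch of such a delete body is:

```yaml
# Sketch of a DeleteOptions body requesting foreground cascading deletion
# (assumed here; the test's actual options are not visible in the log).
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # keep the owner until all dependents are deleted
```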
• [SLOW TEST:72.332 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":281,"skipped":4550,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:04:30.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 12:04:30.711: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08" in namespace "downward-api-5343" to be "Succeeded or Failed" Sep 29 12:04:30.724: INFO: Pod "downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08": Phase="Pending", Reason="", readiness=false. Elapsed: 13.434887ms Sep 29 12:04:32.809: INFO: Pod "downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097631446s Sep 29 12:04:34.813: INFO: Pod "downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08": Phase="Running", Reason="", readiness=true. Elapsed: 4.102481775s Sep 29 12:04:36.818: INFO: Pod "downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107089744s STEP: Saw pod success Sep 29 12:04:36.818: INFO: Pod "downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08" satisfied condition "Succeeded or Failed" Sep 29 12:04:36.821: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08 container client-container: STEP: delete the pod Sep 29 12:04:36.863: INFO: Waiting for pod downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08 to disappear Sep 29 12:04:36.872: INFO: Pod downwardapi-volume-55905aa3-3fe1-45dc-bfb2-5ced220afc08 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:04:36.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5343" for this suite. 
• [SLOW TEST:6.298 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:04:36.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-484 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 29 12:04:36.956: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 29 12:04:37.257: INFO: The status of Pod netserver-0 is Pending, waiting for 
it to be Running (with Ready = true) Sep 29 12:04:39.335: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Sep 29 12:04:41.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:43.262: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:45.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:47.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:49.261: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:51.282: INFO: The status of Pod netserver-0 is Running (Ready = false) Sep 29 12:04:53.261: INFO: The status of Pod netserver-0 is Running (Ready = true) Sep 29 12:04:53.267: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Sep 29 12:04:57.313: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.191:8080/dial?request=hostname&protocol=http&host=10.244.2.186&port=8080&tries=1'] Namespace:pod-network-test-484 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 12:04:57.313: INFO: >>> kubeConfig: /root/.kube/config I0929 12:04:57.354073 7 log.go:181] (0xc000914840) (0xc003e32b40) Create stream I0929 12:04:57.354096 7 log.go:181] (0xc000914840) (0xc003e32b40) Stream added, broadcasting: 1 I0929 12:04:57.358440 7 log.go:181] (0xc000914840) Reply frame received for 1 I0929 12:04:57.358477 7 log.go:181] (0xc000914840) (0xc00646b360) Create stream I0929 12:04:57.358491 7 log.go:181] (0xc000914840) (0xc00646b360) Stream added, broadcasting: 3 I0929 12:04:57.359612 7 log.go:181] (0xc000914840) Reply frame received for 3 I0929 12:04:57.359652 7 log.go:181] (0xc000914840) (0xc0021455e0) Create stream I0929 12:04:57.359667 7 log.go:181] (0xc000914840) (0xc0021455e0) Stream added, broadcasting: 5 I0929 12:04:57.360986 7 log.go:181] (0xc000914840) Reply frame received 
for 5 I0929 12:04:57.436993 7 log.go:181] (0xc000914840) Data frame received for 3 I0929 12:04:57.437026 7 log.go:181] (0xc00646b360) (3) Data frame handling I0929 12:04:57.437057 7 log.go:181] (0xc00646b360) (3) Data frame sent I0929 12:04:57.437257 7 log.go:181] (0xc000914840) Data frame received for 3 I0929 12:04:57.437282 7 log.go:181] (0xc00646b360) (3) Data frame handling I0929 12:04:57.437506 7 log.go:181] (0xc000914840) Data frame received for 5 I0929 12:04:57.437523 7 log.go:181] (0xc0021455e0) (5) Data frame handling I0929 12:04:57.438973 7 log.go:181] (0xc000914840) Data frame received for 1 I0929 12:04:57.439000 7 log.go:181] (0xc003e32b40) (1) Data frame handling I0929 12:04:57.439014 7 log.go:181] (0xc003e32b40) (1) Data frame sent I0929 12:04:57.439029 7 log.go:181] (0xc000914840) (0xc003e32b40) Stream removed, broadcasting: 1 I0929 12:04:57.439043 7 log.go:181] (0xc000914840) Go away received I0929 12:04:57.439161 7 log.go:181] (0xc000914840) (0xc003e32b40) Stream removed, broadcasting: 1 I0929 12:04:57.439193 7 log.go:181] (0xc000914840) (0xc00646b360) Stream removed, broadcasting: 3 I0929 12:04:57.439214 7 log.go:181] (0xc000914840) (0xc0021455e0) Stream removed, broadcasting: 5 Sep 29 12:04:57.439: INFO: Waiting for responses: map[] Sep 29 12:04:57.442: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.191:8080/dial?request=hostname&protocol=http&host=10.244.1.190&port=8080&tries=1'] Namespace:pod-network-test-484 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Sep 29 12:04:57.442: INFO: >>> kubeConfig: /root/.kube/config I0929 12:04:57.472720 7 log.go:181] (0xc003ab0370) (0xc00646b5e0) Create stream I0929 12:04:57.472750 7 log.go:181] (0xc003ab0370) (0xc00646b5e0) Stream added, broadcasting: 1 I0929 12:04:57.475920 7 log.go:181] (0xc003ab0370) Reply frame received for 1 I0929 12:04:57.475980 7 log.go:181] (0xc003ab0370) (0xc00197c000) Create stream 
I0929 12:04:57.476006 7 log.go:181] (0xc003ab0370) (0xc00197c000) Stream added, broadcasting: 3 I0929 12:04:57.477280 7 log.go:181] (0xc003ab0370) Reply frame received for 3 I0929 12:04:57.477330 7 log.go:181] (0xc003ab0370) (0xc003e32d20) Create stream I0929 12:04:57.477347 7 log.go:181] (0xc003ab0370) (0xc003e32d20) Stream added, broadcasting: 5 I0929 12:04:57.478473 7 log.go:181] (0xc003ab0370) Reply frame received for 5 I0929 12:04:57.544447 7 log.go:181] (0xc003ab0370) Data frame received for 3 I0929 12:04:57.544488 7 log.go:181] (0xc00197c000) (3) Data frame handling I0929 12:04:57.544529 7 log.go:181] (0xc00197c000) (3) Data frame sent I0929 12:04:57.544750 7 log.go:181] (0xc003ab0370) Data frame received for 3 I0929 12:04:57.544791 7 log.go:181] (0xc00197c000) (3) Data frame handling I0929 12:04:57.544819 7 log.go:181] (0xc003ab0370) Data frame received for 5 I0929 12:04:57.544928 7 log.go:181] (0xc003e32d20) (5) Data frame handling I0929 12:04:57.546591 7 log.go:181] (0xc003ab0370) Data frame received for 1 I0929 12:04:57.546614 7 log.go:181] (0xc00646b5e0) (1) Data frame handling I0929 12:04:57.546629 7 log.go:181] (0xc00646b5e0) (1) Data frame sent I0929 12:04:57.546644 7 log.go:181] (0xc003ab0370) (0xc00646b5e0) Stream removed, broadcasting: 1 I0929 12:04:57.546716 7 log.go:181] (0xc003ab0370) Go away received I0929 12:04:57.546787 7 log.go:181] (0xc003ab0370) (0xc00646b5e0) Stream removed, broadcasting: 1 I0929 12:04:57.546806 7 log.go:181] (0xc003ab0370) (0xc00197c000) Stream removed, broadcasting: 3 I0929 12:04:57.546827 7 log.go:181] (0xc003ab0370) (0xc003e32d20) Stream removed, broadcasting: 5 Sep 29 12:04:57.546: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:04:57.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-484" for this suite. • [SLOW TEST:20.675 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":283,"skipped":4623,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:04:57.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 12:04:57.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3" in namespace "projected-1219" to be "Succeeded or Failed" Sep 29 12:04:57.704: INFO: Pod "downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.552743ms Sep 29 12:04:59.713: INFO: Pod "downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012373031s Sep 29 12:05:01.719: INFO: Pod "downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017921055s STEP: Saw pod success Sep 29 12:05:01.719: INFO: Pod "downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3" satisfied condition "Succeeded or Failed" Sep 29 12:05:01.722: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3 container client-container: STEP: delete the pod Sep 29 12:05:01.769: INFO: Waiting for pod downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3 to disappear Sep 29 12:05:01.783: INFO: Pod downwardapi-volume-0f971fb3-58f7-4f1f-8c0e-cc1a519ce7f3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:01.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1219" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:01.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Sep 29 12:05:02.354: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Sep 29 12:05:04.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977902, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977902, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977902, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977902, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 12:05:07.852: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 12:05:07.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:08.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-243" for this suite. 
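[Editor's note] The v1-to-v2 conversion above is driven by a conversion webhook declared on the CustomResourceDefinition. A sketch of the relevant `apiextensions.k8s.io/v1` stanza (the service path and port here are illustrative assumptions; the namespace and service name are taken from the log):

```yaml
# CRD conversion config pointing at the test's webhook service.
spec:
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: crd-webhook-243
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert   # illustrative
          port: 9443          # illustrative
```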
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.223 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":285,"skipped":4751,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:09.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 12:05:09.094: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Sep 29 12:05:09.121: INFO: Pod name sample-pod: Found 0 pods out of 1 Sep 29 12:05:14.126: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Sep 29 12:05:14.126: INFO: Creating deployment "test-rolling-update-deployment" Sep 29 12:05:14.136: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Sep 29 12:05:14.161: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Sep 29 12:05:16.169: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Sep 29 12:05:16.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977914, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977914, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977914, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977914, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Sep 29 12:05:18.175: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) 
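[Editor's note] The rolling update here runs with the apps/v1 defaults of 25% maxUnavailable and 25% maxSurge, which is why one old and one new pod coexist briefly (Replicas:2, UpdatedReplicas:1 above). As a spec fragment:

```yaml
# Deployment update strategy in effect for this test (apps/v1 defaults).
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 25%
    maxSurge: 25%
```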
[AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Sep 29 12:05:18.183: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1461 /apis/apps/v1/namespaces/deployment-1461/deployments/test-rolling-update-deployment f3497c04-d085-4cff-958a-6ce9a6205a46 1623445 1 2020-09-29 12:05:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-09-29 12:05:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 12:05:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00321a598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-09-29 12:05:14 +0000 
UTC,LastTransitionTime:2020-09-29 12:05:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-09-29 12:05:17 +0000 UTC,LastTransitionTime:2020-09-29 12:05:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Sep 29 12:05:18.187: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-1461 /apis/apps/v1/namespaces/deployment-1461/replicasets/test-rolling-update-deployment-c4cb8d6d9 0d7a935c-ba65-433e-abdd-5e447fe4534b 1623433 1 2020-09-29 12:05:14 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f3497c04-d085-4cff-958a-6ce9a6205a46 0xc005cb02d0 0xc005cb02d1}] [] [{kube-controller-manager Update apps/v1 2020-09-29 12:05:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3497c04-d085-4cff-958a-6ce9a6205a46\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005cb0348 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Sep 29 12:05:18.187: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Sep 29 12:05:18.187: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1461 /apis/apps/v1/namespaces/deployment-1461/replicasets/test-rolling-update-controller 49c486eb-e31a-44dd-81c4-1772598a4600 1623444 2 2020-09-29 12:05:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f3497c04-d085-4cff-958a-6ce9a6205a46 0xc005cb01af 0xc005cb01c0}] [] [{e2e.test Update apps/v1 2020-09-29 12:05:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 12:05:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f3497c04-d085-4cff-958a-6ce9a6205a46\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005cb0268 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 12:05:18.190: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-mtp54" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-mtp54 test-rolling-update-deployment-c4cb8d6d9- deployment-1461 /api/v1/namespaces/deployment-1461/pods/test-rolling-update-deployment-c4cb8d6d9-mtp54 21c22574-8a7f-4f13-8db4-b9c102cad11b 1623432 0 2020-09-29 12:05:14 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 0d7a935c-ba65-433e-abdd-5e447fe4534b 0xc005cb0840 0xc005cb0841}] [] [{kube-controller-manager Update v1 2020-09-29 12:05:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0d7a935c-ba65-433e-abdd-5e447fe4534b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:05:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.189\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-77kqz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-77kqz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-77kqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},
SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:05:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:05:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:05:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:05:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.189,StartTime:2020-09-29 12:05:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:05:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://74616ab678ade93348760c5eb8900a156cca20ae660eeff0f1e77743cebcb486,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.189,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:18.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1461" for this suite. 
• [SLOW TEST:9.178 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":286,"skipped":4758,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:18.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Sep 29 12:05:18.302: INFO: Waiting up to 5m0s for pod "pod-1e08fe25-6b82-48bb-a541-e17971608c6e" in namespace "emptydir-3370" to be "Succeeded or Failed" Sep 29 12:05:18.313: INFO: Pod "pod-1e08fe25-6b82-48bb-a541-e17971608c6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.826174ms Sep 29 12:05:20.317: INFO: Pod "pod-1e08fe25-6b82-48bb-a541-e17971608c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015679316s Sep 29 12:05:22.322: INFO: Pod "pod-1e08fe25-6b82-48bb-a541-e17971608c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020298559s STEP: Saw pod success Sep 29 12:05:22.322: INFO: Pod "pod-1e08fe25-6b82-48bb-a541-e17971608c6e" satisfied condition "Succeeded or Failed" Sep 29 12:05:22.325: INFO: Trying to get logs from node kali-worker pod pod-1e08fe25-6b82-48bb-a541-e17971608c6e container test-container: STEP: delete the pod Sep 29 12:05:22.375: INFO: Waiting for pod pod-1e08fe25-6b82-48bb-a541-e17971608c6e to disappear Sep 29 12:05:22.539: INFO: Pod pod-1e08fe25-6b82-48bb-a541-e17971608c6e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:22.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3370" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4759,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:22.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Sep 29 12:05:22.674: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:29.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9802" for this suite. 
• [SLOW TEST:6.533 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":288,"skipped":4766,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:29.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Sep 29 12:05:29.153: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602" in namespace "projected-1025" to be "Succeeded or Failed" Sep 29 12:05:29.157: INFO: Pod "downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193146ms Sep 29 12:05:31.188: INFO: Pod "downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034721832s Sep 29 12:05:33.264: INFO: Pod "downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110926136s STEP: Saw pod success Sep 29 12:05:33.264: INFO: Pod "downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602" satisfied condition "Succeeded or Failed" Sep 29 12:05:33.267: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602 container client-container: STEP: delete the pod Sep 29 12:05:33.434: INFO: Waiting for pod downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602 to disappear Sep 29 12:05:33.442: INFO: Pod downwardapi-volume-b2d85c99-d446-44f4-abc5-cc4374809602 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:33.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1025" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4785,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:33.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Sep 29 12:05:33.564: INFO: >>> kubeConfig: /root/.kube/config Sep 29 12:05:36.518: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:47.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5219" for this suite. 
• [SLOW TEST:13.886 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":290,"skipped":4797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:47.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Sep 29 12:05:47.819: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Sep 29 12:05:49.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977947, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977947, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977947, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63736977947, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Sep 29 12:05:52.875: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:05:53.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2413" for this suite. STEP: Destroying namespace "webhook-2413-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.932 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":291,"skipped":4824,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:05:53.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-331dbd4b-adba-45ae-81cc-6bd58810e7ce in namespace container-probe-1695 Sep 29 12:05:57.381: INFO: Started pod test-webserver-331dbd4b-adba-45ae-81cc-6bd58810e7ce in namespace container-probe-1695 STEP: checking the pod's current state and verifying that restartCount is present Sep 29 12:05:57.383: INFO: Initial restart count of pod test-webserver-331dbd4b-adba-45ae-81cc-6bd58810e7ce is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:09:58.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1695" for this suite. 
• [SLOW TEST:244.893 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4829,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:09:58.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-bf795eaf-48af-4cde-9914-8a0e2c3722c7 STEP: Creating a pod to test consume secrets Sep 29 12:09:58.520: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2" in namespace "projected-429" to be "Succeeded or 
Failed" Sep 29 12:09:58.524: INFO: Pod "pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.554799ms Sep 29 12:10:00.698: INFO: Pod "pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177924898s Sep 29 12:10:02.704: INFO: Pod "pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183694905s STEP: Saw pod success Sep 29 12:10:02.704: INFO: Pod "pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2" satisfied condition "Succeeded or Failed" Sep 29 12:10:02.707: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2 container projected-secret-volume-test: STEP: delete the pod Sep 29 12:10:02.757: INFO: Waiting for pod pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2 to disappear Sep 29 12:10:02.767: INFO: Pod pod-projected-secrets-ccd1d7a2-3491-46e0-bc26-cff0f193b3c2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:10:02.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-429" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4830,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:02.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 29 12:10:03.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb" in namespace "downward-api-947" to be "Succeeded or Failed"
Sep 29 12:10:03.087: INFO: Pod "downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.343436ms
Sep 29 12:10:05.090: INFO: Pod "downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038782889s
Sep 29 12:10:07.141: INFO: Pod "downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089250218s
STEP: Saw pod success
Sep 29 12:10:07.141: INFO: Pod "downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb" satisfied condition "Succeeded or Failed"
Sep 29 12:10:07.144: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb container client-container:
STEP: delete the pod
Sep 29 12:10:07.231: INFO: Waiting for pod downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb to disappear
Sep 29 12:10:07.261: INFO: Pod downwardapi-volume-1240d262-eaf6-46c7-93de-33b18b1b61fb no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:10:07.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-947" for this suite.
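For context on the downward API test above: "podname only" means the pod's own name is projected into a file via a downwardAPI volume item backed by `fieldRef: metadata.name`. A minimal sketch of that manifest shape (names and image are illustrative assumptions, not the test's generated ones):

```yaml
# Sketch of a downwardAPI volume exposing only the pod name.
# Names and image are illustrative; the e2e framework generates its own.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumption: any image that can cat the projected file
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the single field this conformance case checks
```

The container's log is then expected to contain exactly the pod's name, which is what the framework reads back before deleting the pod.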
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4842,"failed":0}
SSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:07.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 12:10:07.354: INFO: Creating deployment "webserver-deployment"
Sep 29 12:10:07.359: INFO: Waiting for observed generation 1
Sep 29 12:10:09.393: INFO: Waiting for all required pods to come up
Sep 29 12:10:09.398: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Sep 29 12:10:19.487: INFO: Waiting for deployment "webserver-deployment" to complete
Sep 29 12:10:19.493: INFO: Updating deployment "webserver-deployment" with a non-existent image
Sep 29 12:10:19.500: INFO: Updating deployment webserver-deployment
Sep 29 12:10:19.500: INFO: Waiting for observed generation 2
Sep 29 12:10:21.522: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Sep 29 12:10:21.525: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Sep 29 12:10:21.527: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 29 12:10:21.532: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Sep 29 12:10:21.532: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Sep 29 12:10:21.534: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Sep 29 12:10:21.537: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Sep 29 12:10:21.537: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Sep 29 12:10:21.541: INFO: Updating deployment webserver-deployment
Sep 29 12:10:21.541: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Sep 29 12:10:21.657: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Sep 29 12:10:22.016: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
Sep 29 12:10:22.388: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9970 /apis/apps/v1/namespaces/deployment-9970/deployments/webserver-deployment 5ec83476-3ea5-42f9-8f1b-aa18934b9317 1624814 3 2020-09-29 12:10:07 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-09-29 12:10:21 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b04758 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-09-29 12:10:20 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-09-29 12:10:21 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Sep 29 12:10:22.503: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9970 /apis/apps/v1/namespaces/deployment-9970/replicasets/webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 1624862 3 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5ec83476-3ea5-42f9-8f1b-aa18934b9317 0xc002b04bd7 0xc002b04bd8}] [] [{kube-controller-manager Update apps/v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ec83476-3ea5-42f9-8f1b-aa18934b9317\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b04c58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Sep 29 12:10:22.503: INFO: All old ReplicaSets of Deployment "webserver-deployment": Sep 29 12:10:22.503: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-9970 /apis/apps/v1/namespaces/deployment-9970/replicasets/webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 1624860 3 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5ec83476-3ea5-42f9-8f1b-aa18934b9317 0xc002b04cb7 0xc002b04cb8}] [] [{kube-controller-manager Update apps/v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ec83476-3ea5-42f9-8f1b-aa18934b9317\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selecto
r:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b04d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Sep 29 12:10:22.587: INFO: Pod "webserver-deployment-795d758f88-2zs9x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2zs9x webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-2zs9x 76ed6ccd-15b1-4f66-89b0-c44912a9a076 1624769 0 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05247 0xc002b05248}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 12:10:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.587: INFO: Pod "webserver-deployment-795d758f88-7lxgg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7lxgg webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-7lxgg fcbac410-ee7d-4c53-944d-6f9288089356 1624781 0 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b053f7 0xc002b053f8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[
]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-29 12:10:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.588: INFO: Pod "webserver-deployment-795d758f88-87kr7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-87kr7 webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-87kr7 3b8c4f40-9ab6-4a9d-ab89-5ab019c29669 1624853 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b055a7 0xc002b055a8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.588: INFO: Pod "webserver-deployment-795d758f88-gn2wc" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-gn2wc webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-gn2wc 835d34eb-4225-4f51-a8a5-d6978ed2abc9 1624855 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b056e7 0xc002b056e8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-29 12:10:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.588: INFO: Pod "webserver-deployment-795d758f88-hmjds" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-hmjds webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-hmjds f00a0a78-9f11-4abc-8b44-0b8911856c67 1624864 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05897 0xc002b05898}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.588: INFO: Pod "webserver-deployment-795d758f88-j6cnh" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-j6cnh webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-j6cnh daab8edf-80cc-44f6-a5cf-bac1332f0848 1624817 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b059d7 0xc002b059d8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Pod
Scheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.589: INFO: Pod "webserver-deployment-795d758f88-j7nvp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j7nvp webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-j7nvp ef264d4f-aa9e-4c67-ada8-2639f24308df 1624839 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05b17 0xc002b05b18}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:n
il,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.589: INFO: Pod "webserver-deployment-795d758f88-jgbgq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jgbgq webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-jgbgq e8d6dd7c-cad0-4894-908c-8e3141e91f3d 1624840 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05c57 0xc002b05c58}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.589: INFO: Pod "webserver-deployment-795d758f88-phcb2" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-phcb2 webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-phcb2 21f4b380-9d4f-4a86-af84-2957c0b25451 1624852 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05d97 0xc002b05d98}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{Volum
eMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Pod
Scheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.589: INFO: Pod "webserver-deployment-795d758f88-rmqbw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rmqbw webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-rmqbw 1e7aab3f-c2de-47af-bc46-54e105b7d23c 1624756 0 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc002b05ed7 0xc002b05ed8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 12:10:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.590: INFO: Pod "webserver-deployment-795d758f88-sh7xg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sh7xg webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-sh7xg 0b61c651-92a2-4e59-a1da-0546a19ead1f 1624829 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc003da0597 0xc003da0598}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.590: INFO: Pod "webserver-deployment-795d758f88-ss22x" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-ss22x webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-ss22x 17bf1d41-a380-41ad-a997-3cfe713b575c 1624760 0 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc003da06e7 0xc003da06e8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-29 12:10:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.590: INFO: Pod "webserver-deployment-795d758f88-zr5zb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zr5zb webserver-deployment-795d758f88- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-795d758f88-zr5zb d77039e0-ee28-464f-b10c-34a59780169d 1624783 0 2020-09-29 12:10:19 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 f127b8a2-9da7-448b-9298-c31b63251aea 0xc003da0b57 0xc003da0b58}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f127b8a2-9da7-448b-9298-c31b63251aea\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-09-29 12:10:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 12:10:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.590: INFO: Pod "webserver-deployment-dd94f59b7-47w7h" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-47w7h webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-47w7h c0527176-2e13-4f58-8742-e401284c0dee 1624691 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1027 0xc003da1028}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.194,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8e855279420b7a217f2cb458fc52923122ff3a1ab997b5328baa8b885ebf6e44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.591: INFO: Pod "webserver-deployment-dd94f59b7-4hkrt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4hkrt webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-4hkrt 3b614c9c-48c2-4186-86ab-65399a5c30b3 1624854 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da11d7 0xc003da11d8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.591: INFO: Pod "webserver-deployment-dd94f59b7-68stp" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-68stp webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-68stp 6e65027b-adf0-47cf-af2e-84f4513b3fa3 1624648 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1437 0xc003da1438}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.193\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.193,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ef33beea3c88f9a2f677b7e921b0a0210b1796d3c54a8c23a675ae8503c4761a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.591: INFO: Pod "webserver-deployment-dd94f59b7-6nphp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6nphp webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-6nphp 0a3179d4-b43b-404e-8111-3892fa59fd24 1624712 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1757 0xc003da1758}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.195,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b63fdeaec4dcfd32b25d1a6ab0972082003f401f6551e8c339e100625cb9c5c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.591: INFO: Pod "webserver-deployment-dd94f59b7-6tnrt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6tnrt webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-6tnrt e671c2d1-de01-473c-8160-17b0503128fb 1624858 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 
0xc003da1907 0xc003da1908}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAs
NonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 
12:10:22.591: INFO: Pod "webserver-deployment-dd94f59b7-8dhcw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8dhcw webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-8dhcw 273ab482-4ab7-4db2-90fa-0908cf1538ef 1624833 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1a37 0xc003da1a38}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:R
esourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Se
tHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.592: INFO: Pod "webserver-deployment-dd94f59b7-9ccvf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9ccvf webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-9ccvf 2a4c1b15-65de-48f3-b741-4d51e8a9acb5 1624677 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1b67 0xc003da1b68}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.196,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://433924d3c7ce70b75f181c7c0f888c00e2a6dfaf13e027f774cd5602a981368e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.592: INFO: Pod "webserver-deployment-dd94f59b7-9p7hm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9p7hm webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-9p7hm 194770f0-8c07-4ba3-a9c8-f41aae009a0f 1624873 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1d27 0xc003da1d28}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{P
hase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 12:10:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.592: INFO: Pod "webserver-deployment-dd94f59b7-gvbqt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gvbqt webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-gvbqt a1da81d1-8d29-454c-b17b-bb9b57c4c31e 1624837 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc003da1eb7 0xc003da1eb8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.592: INFO: Pod "webserver-deployment-dd94f59b7-hhccg" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hhccg webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-hhccg 4eb98aca-3a5a-4e86-8220-19781683b8f0 1624692 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08007 0xc004a08008}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.197,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://032d80fe834110bee1bee63184bdb1c526574702b050d04f47cd9dfd85dac8d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 29 12:10:22.592: INFO: Pod "webserver-deployment-dd94f59b7-mxcfm" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mxcfm webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-mxcfm 07fa966b-faec-4f84-a446-d7f1d9b021a8 1624832 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a081b7 0xc004a081b8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-nhm7z" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nhm7z webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-nhm7z 4391353b-10dd-4a00-97ed-04fc2f99ae4c 1624702 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a082e7 0xc004a082e8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.196\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.196,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://894cbfacf44cb468a29862fc3ad4203b3ab49c18aa7248a991a8e0ec992ddf8c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-pggh2" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pggh2 webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-pggh2 7915faf9-a597-47b5-9921-7e3e02f5b060 1624876 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08497 0xc004a08498}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Ph
ase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-09-29 12:10:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-qfltr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qfltr webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-qfltr 202a4ea8-95cb-4620-9fca-a6aa7c6f31f1 1624834 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08627 0xc004a08628}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-qnf2v" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qnf2v webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-qnf2v 899e8df5-64ac-434c-893d-5fae96ddfbe6 1624838 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08757 0xc004a08758}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-09-29 12:10:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-s2z9w" is available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-s2z9w webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-s2z9w 036e58e7-600f-4797-a6d4-e27bbfcea3dd 1624709 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a088e7 0xc004a088e8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.197\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.197,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://57599b17d8f14b4e7b55eb617a383ceeb97643db08690624bc63beb896a0b5a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.197,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.593: INFO: Pod "webserver-deployment-dd94f59b7-stnrq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-stnrq webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-stnrq 49fd359e-b34f-41c3-b94e-d90cde7ae37b 1624835 0 2020-09-29 12:10:21 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08a97 0xc004a08a98}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.594: INFO: Pod "webserver-deployment-dd94f59b7-vfg88" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vfg88 webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-vfg88 df25c7d2-8435-4b63-8003-7d427b9440e1 1624703 0 2020-09-29 12:10:07 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08bc7 0xc004a08bc8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-09-29 12:10:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.198\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.198,StartTime:2020-09-29 12:10:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-09-29 12:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5329303317b6f4e7dc7428a80cdfb616448e6c71df8fa2094859b0694265bf37,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.594: INFO: Pod "webserver-deployment-dd94f59b7-vv5xk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vv5xk webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-vv5xk 94451df8-1c25-4007-bc17-4814899f4ee3 1624856 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08d77 0xc004a08d78}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Sep 29 12:10:22.594: INFO: Pod "webserver-deployment-dd94f59b7-wsgkz" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wsgkz webserver-deployment-dd94f59b7- deployment-9970 /api/v1/namespaces/deployment-9970/pods/webserver-deployment-dd94f59b7-wsgkz c0ee75b8-5353-4d1f-aaf8-8fe68f9434e9 1624857 0 2020-09-29 12:10:22 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 fb62a52a-ccf7-4d22-899a-07e3346d14b2 0xc004a08ea7 0xc004a08ea8}] [] [{kube-controller-manager Update v1 2020-09-29 12:10:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fb62a52a-ccf7-4d22-899a-07e3346d14b2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ghkqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ghkqt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-ghkqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-09-29 12:10:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:10:22.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9970" for this suite. • [SLOW TEST:15.538 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":295,"skipped":4845,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Sep 29 12:10:22.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Sep 29 12:10:23.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f -' Sep 29 12:10:43.629: INFO: stderr: "" Sep 29 12:10:43.629: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Sep 29 12:10:43.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config diff -f -' Sep 29 12:10:45.245: INFO: rc: 1 Sep 29 12:10:45.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete -f -' Sep 29 12:10:45.440: INFO: stderr: "" Sep 29 12:10:45.440: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:10:45.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-996" for this suite. 
• [SLOW TEST:22.707 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl diff
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888
    should check if kubectl diff finds a difference for Deployments [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":296,"skipped":4847,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:45.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:10:52.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9609" for this suite.
• [SLOW TEST:6.664 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":297,"skipped":4851,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:52.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Sep 29 12:10:53.042: INFO: starting watch
STEP: patching
STEP: updating
Sep 29 12:10:53.061: INFO: waiting for watch events with expected annotations
Sep 29 12:10:53.061: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:10:53.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-5675" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":298,"skipped":4868,"failed":0}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:53.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:10:57.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-88" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4872,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:10:57.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Sep 29 12:10:57.777: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b" in namespace "projected-9892" to be "Succeeded or Failed"
Sep 29 12:10:57.789: INFO: Pod "downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.29546ms
Sep 29 12:10:59.792: INFO: Pod "downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015248664s
Sep 29 12:11:01.796: INFO: Pod "downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019463287s
STEP: Saw pod success
Sep 29 12:11:01.796: INFO: Pod "downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b" satisfied condition "Succeeded or Failed"
Sep 29 12:11:01.799: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b container client-container:
STEP: delete the pod
Sep 29 12:11:01.838: INFO: Waiting for pod downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b to disappear
Sep 29 12:11:01.845: INFO: Pod downwardapi-volume-8b024c30-c937-4ee4-8c3b-d7953c199e2b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:11:01.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9892" for this suite.
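The "Waiting up to 5m0s for pod … to be \"Succeeded or Failed\"" lines above come from the e2e framework's phase-polling loop: read the pod's phase, stop on a terminal phase, retry otherwise until the timeout. A minimal sketch of that pattern in Python, with a stubbed `get_pod_phase` callable standing in for a real API read (the helper name and parameters are illustrative, not the framework's actual API):

```python
import time

def wait_for_pod_condition(get_pod_phase, timeout=300.0, interval=2.0):
    """Poll get_pod_phase() until it returns a terminal phase or the timeout expires.

    Mirrors the e2e framework's 'Waiting up to 5m0s for pod ... to be
    "Succeeded or Failed"' loop; get_pod_phase is a stand-in for a real
    Kubernetes API read.
    """
    deadline = time.monotonic() + timeout
    while True:
        phase = get_pod_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence matching the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases), timeout=300.0, interval=0.0)
```

The log above shows roughly 2-second gaps between phase checks, matching the `interval` default in this sketch.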
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4872,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:11:01.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2942
[It] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-2942
Sep 29 12:11:01.974: INFO: Found 0 stateful pods, waiting for 1
Sep 29 12:11:11.979: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Sep 29 12:11:12.059: INFO: Deleting all statefulset in ns statefulset-2942
Sep 29 12:11:12.079: INFO: Scaling statefulset ss to 0
Sep 29 12:11:22.146: INFO: Waiting for statefulset status.replicas updated to 0
Sep 29 12:11:22.150: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:11:22.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2942" for this suite.
• [SLOW TEST:20.356 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should have a working scale subresource [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":301,"skipped":4874,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:11:22.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Sep 29 12:11:22.633: INFO: Checking APIGroup: apiregistration.k8s.io
Sep 29 12:11:22.634: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Sep 29 12:11:22.634: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.634: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Sep 29 12:11:22.634: INFO: Checking APIGroup: extensions
Sep 29 12:11:22.635: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Sep 29 12:11:22.635: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Sep 29 12:11:22.635: INFO: extensions/v1beta1 matches extensions/v1beta1
Sep 29 12:11:22.635: INFO: Checking APIGroup: apps
Sep 29 12:11:22.636: INFO: PreferredVersion.GroupVersion: apps/v1
Sep 29 12:11:22.636: INFO: Versions found [{apps/v1 v1}]
Sep 29 12:11:22.636: INFO: apps/v1 matches apps/v1
Sep 29 12:11:22.636: INFO: Checking APIGroup: events.k8s.io
Sep 29 12:11:22.637: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Sep 29 12:11:22.637: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.637: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Sep 29 12:11:22.637: INFO: Checking APIGroup: authentication.k8s.io
Sep 29 12:11:22.638: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Sep 29 12:11:22.638: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.638: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Sep 29 12:11:22.638: INFO: Checking APIGroup: authorization.k8s.io
Sep 29 12:11:22.639: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Sep 29 12:11:22.639: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.639: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Sep 29 12:11:22.639: INFO: Checking APIGroup: autoscaling
Sep 29 12:11:22.640: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Sep 29 12:11:22.640: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Sep 29 12:11:22.640: INFO: autoscaling/v1 matches autoscaling/v1
Sep 29 12:11:22.640: INFO: Checking APIGroup: batch
Sep 29 12:11:22.641: INFO: PreferredVersion.GroupVersion: batch/v1
Sep 29 12:11:22.641: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Sep 29 12:11:22.641: INFO: batch/v1 matches batch/v1
Sep 29 12:11:22.641: INFO: Checking APIGroup: certificates.k8s.io
Sep 29 12:11:22.641: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Sep 29 12:11:22.641: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.641: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Sep 29 12:11:22.641: INFO: Checking APIGroup: networking.k8s.io
Sep 29 12:11:22.642: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Sep 29 12:11:22.642: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.642: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Sep 29 12:11:22.642: INFO: Checking APIGroup: policy
Sep 29 12:11:22.643: INFO: PreferredVersion.GroupVersion: policy/v1beta1
Sep 29 12:11:22.643: INFO: Versions found [{policy/v1beta1 v1beta1}]
Sep 29 12:11:22.643: INFO: policy/v1beta1 matches policy/v1beta1
Sep 29 12:11:22.643: INFO: Checking APIGroup: rbac.authorization.k8s.io
Sep 29 12:11:22.644: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Sep 29 12:11:22.644: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.644: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Sep 29 12:11:22.644: INFO: Checking APIGroup: storage.k8s.io
Sep 29 12:11:22.645: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Sep 29 12:11:22.645: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.645: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Sep 29 12:11:22.645: INFO: Checking APIGroup: admissionregistration.k8s.io
Sep 29 12:11:22.646: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Sep 29 12:11:22.646: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.646: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Sep 29 12:11:22.646: INFO: Checking APIGroup: apiextensions.k8s.io
Sep 29 12:11:22.647: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Sep 29 12:11:22.647: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.647: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Sep 29 12:11:22.647: INFO: Checking APIGroup: scheduling.k8s.io
Sep 29 12:11:22.648: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Sep 29 12:11:22.648: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.648: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Sep 29 12:11:22.648: INFO: Checking APIGroup: coordination.k8s.io
Sep 29 12:11:22.649: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Sep 29 12:11:22.649: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.649: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Sep 29 12:11:22.649: INFO: Checking APIGroup: node.k8s.io
Sep 29 12:11:22.650: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
Sep 29 12:11:22.650: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.650: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
Sep 29 12:11:22.650: INFO: Checking APIGroup: discovery.k8s.io
Sep 29 12:11:22.651: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
Sep 29 12:11:22.651: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
Sep 29 12:11:22.651: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Sep 29 12:11:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1274" for this suite.
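Each "PreferredVersion.GroupVersion: … matches …" pair above is the conformance test asserting that a group's preferred version appears among the versions the server actually serves, as reported in the /apis discovery document (an APIGroupList). A rough sketch of that check in Python, using sample group data transcribed from the log rather than fetched from a live cluster:

```python
def preferred_version_is_served(group: dict) -> bool:
    """Check that an APIGroup's preferredVersion appears among its served versions.

    `group` follows the shape of entries in the /apis discovery document
    (APIGroupList); this mirrors the assertion logged above.
    """
    preferred = group["preferredVersion"]["groupVersion"]
    served = {v["groupVersion"] for v in group["versions"]}
    return preferred in served

# Sample APIGroup entries transcribed from the log output above.
groups = [
    {
        "name": "batch",
        "preferredVersion": {"groupVersion": "batch/v1", "version": "v1"},
        "versions": [
            {"groupVersion": "batch/v1", "version": "v1"},
            {"groupVersion": "batch/v1beta1", "version": "v1beta1"},
        ],
    },
    {
        "name": "node.k8s.io",
        "preferredVersion": {"groupVersion": "node.k8s.io/v1beta1", "version": "v1beta1"},
        "versions": [{"groupVersion": "node.k8s.io/v1beta1", "version": "v1beta1"}],
    },
]
results = {g["name"]: preferred_version_is_served(g) for g in groups}
```

Against a live cluster, the same discovery document can be retrieved with `kubectl get --raw /apis`.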
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":302,"skipped":4882,"failed":0}
SSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Sep 29 12:11:22.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-8014
STEP: creating service affinity-clusterip-transition in namespace services-8014
STEP: creating replication controller affinity-clusterip-transition in namespace services-8014
I0929 12:11:22.853745 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-8014, replica count: 3
I0929 12:11:25.904144 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0929 12:11:28.904431 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Sep 29 12:11:28.945: INFO: Creating new exec pod Sep 29 12:11:34.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8014 execpod-affinity9mmqv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Sep 29 12:11:34.226: INFO: stderr: "I0929 12:11:34.137407 3440 log.go:181] (0xc000b38dc0) (0xc000163a40) Create stream\nI0929 12:11:34.137462 3440 log.go:181] (0xc000b38dc0) (0xc000163a40) Stream added, broadcasting: 1\nI0929 12:11:34.143174 3440 log.go:181] (0xc000b38dc0) Reply frame received for 1\nI0929 12:11:34.143217 3440 log.go:181] (0xc000b38dc0) (0xc000924000) Create stream\nI0929 12:11:34.143231 3440 log.go:181] (0xc000b38dc0) (0xc000924000) Stream added, broadcasting: 3\nI0929 12:11:34.144570 3440 log.go:181] (0xc000b38dc0) Reply frame received for 3\nI0929 12:11:34.144611 3440 log.go:181] (0xc000b38dc0) (0xc00070e140) Create stream\nI0929 12:11:34.144621 3440 log.go:181] (0xc000b38dc0) (0xc00070e140) Stream added, broadcasting: 5\nI0929 12:11:34.145625 3440 log.go:181] (0xc000b38dc0) Reply frame received for 5\nI0929 12:11:34.218211 3440 log.go:181] (0xc000b38dc0) Data frame received for 5\nI0929 12:11:34.218267 3440 log.go:181] (0xc00070e140) (5) Data frame handling\nI0929 12:11:34.218304 3440 log.go:181] (0xc00070e140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0929 12:11:34.218739 3440 log.go:181] (0xc000b38dc0) Data frame received for 5\nI0929 12:11:34.218783 3440 log.go:181] (0xc00070e140) (5) Data frame handling\nI0929 12:11:34.218806 3440 log.go:181] (0xc00070e140) (5) Data frame sent\nI0929 12:11:34.218824 3440 log.go:181] (0xc000b38dc0) Data frame received for 5\nI0929 12:11:34.218841 3440 log.go:181] (0xc00070e140) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0929 12:11:34.218961 
3440 log.go:181] (0xc000b38dc0) Data frame received for 3\nI0929 12:11:34.218996 3440 log.go:181] (0xc000924000) (3) Data frame handling\nI0929 12:11:34.221460 3440 log.go:181] (0xc000b38dc0) Data frame received for 1\nI0929 12:11:34.221490 3440 log.go:181] (0xc000163a40) (1) Data frame handling\nI0929 12:11:34.221502 3440 log.go:181] (0xc000163a40) (1) Data frame sent\nI0929 12:11:34.221517 3440 log.go:181] (0xc000b38dc0) (0xc000163a40) Stream removed, broadcasting: 1\nI0929 12:11:34.221568 3440 log.go:181] (0xc000b38dc0) Go away received\nI0929 12:11:34.221937 3440 log.go:181] (0xc000b38dc0) (0xc000163a40) Stream removed, broadcasting: 1\nI0929 12:11:34.221968 3440 log.go:181] (0xc000b38dc0) (0xc000924000) Stream removed, broadcasting: 3\nI0929 12:11:34.221987 3440 log.go:181] (0xc000b38dc0) (0xc00070e140) Stream removed, broadcasting: 5\n" Sep 29 12:11:34.226: INFO: stdout: "" Sep 29 12:11:34.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8014 execpod-affinity9mmqv -- /bin/sh -x -c nc -zv -t -w 2 10.111.151.179 80' Sep 29 12:11:34.450: INFO: stderr: "I0929 12:11:34.381319 3458 log.go:181] (0xc000850000) (0xc0008e4460) Create stream\nI0929 12:11:34.381391 3458 log.go:181] (0xc000850000) (0xc0008e4460) Stream added, broadcasting: 1\nI0929 12:11:34.383408 3458 log.go:181] (0xc000850000) Reply frame received for 1\nI0929 12:11:34.383443 3458 log.go:181] (0xc000850000) (0xc0008e4d20) Create stream\nI0929 12:11:34.383455 3458 log.go:181] (0xc000850000) (0xc0008e4d20) Stream added, broadcasting: 3\nI0929 12:11:34.384339 3458 log.go:181] (0xc000850000) Reply frame received for 3\nI0929 12:11:34.384387 3458 log.go:181] (0xc000850000) (0xc000848280) Create stream\nI0929 12:11:34.384411 3458 log.go:181] (0xc000850000) (0xc000848280) Stream added, broadcasting: 5\nI0929 12:11:34.385329 3458 log.go:181] (0xc000850000) Reply frame received for 5\nI0929 12:11:34.445001 3458 log.go:181] 
(0xc000850000) Data frame received for 3\nI0929 12:11:34.445028 3458 log.go:181] (0xc0008e4d20) (3) Data frame handling\nI0929 12:11:34.445047 3458 log.go:181] (0xc000850000) Data frame received for 5\nI0929 12:11:34.445052 3458 log.go:181] (0xc000848280) (5) Data frame handling\nI0929 12:11:34.445058 3458 log.go:181] (0xc000848280) (5) Data frame sent\nI0929 12:11:34.445072 3458 log.go:181] (0xc000850000) Data frame received for 5\nI0929 12:11:34.445076 3458 log.go:181] (0xc000848280) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.151.179 80\nConnection to 10.111.151.179 80 port [tcp/http] succeeded!\nI0929 12:11:34.446128 3458 log.go:181] (0xc000850000) Data frame received for 1\nI0929 12:11:34.446159 3458 log.go:181] (0xc0008e4460) (1) Data frame handling\nI0929 12:11:34.446169 3458 log.go:181] (0xc0008e4460) (1) Data frame sent\nI0929 12:11:34.446179 3458 log.go:181] (0xc000850000) (0xc0008e4460) Stream removed, broadcasting: 1\nI0929 12:11:34.446199 3458 log.go:181] (0xc000850000) Go away received\nI0929 12:11:34.446453 3458 log.go:181] (0xc000850000) (0xc0008e4460) Stream removed, broadcasting: 1\nI0929 12:11:34.446467 3458 log.go:181] (0xc000850000) (0xc0008e4d20) Stream removed, broadcasting: 3\nI0929 12:11:34.446476 3458 log.go:181] (0xc000850000) (0xc000848280) Stream removed, broadcasting: 5\n" Sep 29 12:11:34.450: INFO: stdout: "" Sep 29 12:11:34.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8014 execpod-affinity9mmqv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.151.179:80/ ; done' Sep 29 12:11:34.793: INFO: stderr: "I0929 12:11:34.597277 3476 log.go:181] (0xc000d174a0) (0xc000d0e960) Create stream\nI0929 12:11:34.597324 3476 log.go:181] (0xc000d174a0) (0xc000d0e960) Stream added, broadcasting: 1\nI0929 12:11:34.603835 3476 log.go:181] (0xc000d174a0) Reply frame received for 1\nI0929 12:11:34.603884 3476 
log.go:181] (0xc000d174a0) (0xc000d0e000) Create stream\nI0929 12:11:34.603914 3476 log.go:181] (0xc000d174a0) (0xc000d0e000) Stream added, broadcasting: 3\nI0929 12:11:34.604926 3476 log.go:181] (0xc000d174a0) Reply frame received for 3\nI0929 12:11:34.604968 3476 log.go:181] (0xc000d174a0) (0xc000d0e0a0) Create stream\nI0929 12:11:34.604979 3476 log.go:181] (0xc000d174a0) (0xc000d0e0a0) Stream added, broadcasting: 5\nI0929 12:11:34.605722 3476 log.go:181] (0xc000d174a0) Reply frame received for 5\nI0929 12:11:34.673061 3476 log.go:181] (0xc000d174a0) Data frame received for 5\nI0929 12:11:34.673102 3476 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0929 12:11:34.673117 3476 log.go:181] (0xc000d0e0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/\nI0929 12:11:34.673139 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.673150 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.673161 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.680313 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.680336 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.680356 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.681333 3476 log.go:181] (0xc000d174a0) Data frame received for 5\nI0929 12:11:34.681353 3476 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0929 12:11:34.681363 3476 log.go:181] (0xc000d0e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/\nI0929 12:11:34.681375 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.681409 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.681424 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.684646 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.684665 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 
12:11:34.684684 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.685843 3476 log.go:181] (0xc000d174a0) Data frame received for 5\nI0929 12:11:34.685883 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.685928 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.685946 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.685978 3476 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0929 12:11:34.686015 3476 log.go:181] (0xc000d0e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/\nI0929 12:11:34.689213 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.689235 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.689254 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.690184 3476 log.go:181] (0xc000d174a0) Data frame received for 5\nI0929 12:11:34.690195 3476 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0929 12:11:34.690204 3476 log.go:181] (0xc000d0e0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/\nI0929 12:11:34.690341 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.690375 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.690413 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.697961 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.697984 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.697998 3476 log.go:181] (0xc000d0e000) (3) Data frame sent\nI0929 12:11:34.698806 3476 log.go:181] (0xc000d174a0) Data frame received for 3\nI0929 12:11:34.698828 3476 log.go:181] (0xc000d0e000) (3) Data frame handling\nI0929 12:11:34.698852 3476 log.go:181] (0xc000d174a0) Data frame received for 5\nI0929 12:11:34.698884 3476 log.go:181] (0xc000d0e0a0) (5) Data frame handling\nI0929 12:11:34.698902 3476 log.go:181] (0xc000d0e0a0) (5) Data 
frame sent\n<~150 repetitive SPDY "Data frame received / Data frame handling / Data frame sent" debug lines from the kubectl exec stream elided; interleaved shell trace showed "+ echo" and "+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/" for each of the 16 requests>\nI0929 12:11:34.787867 3476 log.go:181] (0xc000d174a0) (0xc000d0e960) Stream removed, broadcasting: 1\nI0929 12:11:34.787894 3476 log.go:181] (0xc000d174a0) (0xc000d0e000) Stream removed, broadcasting: 3\nI0929 12:11:34.787907 3476 log.go:181] (0xc000d174a0) (0xc000d0e0a0) Stream removed, broadcasting: 5\n" Sep 29 12:11:34.793: INFO: stdout: "\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-s7v5j\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-vsvl4\naffinity-clusterip-transition-s7v5j" Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.793: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:34.793: INFO: Received response from host:
affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-vsvl4 Sep 29 12:11:34.794: INFO: Received response from host: affinity-clusterip-transition-s7v5j Sep 29 12:11:34.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-8014 execpod-affinity9mmqv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.151.179:80/ ; done' Sep 29 12:11:35.135: INFO: stderr: "I0929 12:11:34.951021 3494 log.go:181] (0xc000960f20) (0xc000634a00) Create stream\nI0929 12:11:34.951074 3494 log.go:181] (0xc000960f20) (0xc000634a00) Stream added, broadcasting: 1\n<~150 repetitive SPDY "Data frame received / Data frame handling / Data frame sent" debug lines from the kubectl exec stream elided; interleaved shell trace showed "+ seq 0 15", "+ echo" and "+ curl -q -s --connect-timeout 2 http://10.111.151.179:80/" for each of the 16 requests>\nI0929 12:11:35.131031 3494 log.go:181] (0xc000960f20) (0xc000634a00) Stream removed, broadcasting: 1\nI0929 12:11:35.131046 3494 log.go:181] (0xc000960f20) (0xc000635540) Stream removed, broadcasting: 3\nI0929 12:11:35.131051 3494
log.go:181] (0xc000960f20) (0xc0004120a0) Stream removed, broadcasting: 5\n" Sep 29 12:11:35.135: INFO: stdout: "\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv\naffinity-clusterip-transition-hvqvv" Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 
29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Received response from host: affinity-clusterip-transition-hvqvv Sep 29 12:11:35.135: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-8014, will wait for the garbage collector to delete the pods Sep 29 12:11:35.266: INFO: Deleting ReplicationController affinity-clusterip-transition took: 26.759441ms Sep 29 12:11:35.666: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 400.225043ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Sep 29 12:11:48.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8014" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.135 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":303,"skipped":4888,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSep 29 12:11:48.809: INFO: Running AfterSuite actions on all nodes Sep 29 12:11:48.810: INFO: Running AfterSuite actions on node 1 Sep 29 12:11:48.810: INFO: Skipping dumping 
logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 6026.361 seconds SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS
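The passing run above verifies session affinity by hitting the service's ClusterIP sixteen times from an exec pod and checking which backend pod answers: mixed hostnames before affinity is enabled, a single hostname afterwards. A minimal sketch of that check is below; the function names (`probe`, `affinity_held`) and the canned sample output are illustrative assumptions, not part of the e2e suite, though the curl loop mirrors the one the test actually runs.

```shell
#!/bin/sh
# Probe loop as run by the e2e test inside the exec pod:
# 16 requests against the service ClusterIP, one hostname per response.
probe() {
  for i in $(seq 0 15); do
    echo
    curl -q -s --connect-timeout 2 "http://$1:$2/"
  done
}

# Affinity check (assumed helper): affinity held if the probe output
# contains exactly one distinct non-empty hostname.
affinity_held() {
  distinct=$(printf '%s\n' "$1" | sed '/^$/d' | sort -u | wc -l | tr -d ' ')
  [ "$distinct" -eq 1 ]
}

# Canned sample mimicking the passing run (all responses from one pod):
out="affinity-clusterip-transition-hvqvv
affinity-clusterip-transition-hvqvv
affinity-clusterip-transition-hvqvv"
affinity_held "$out" && echo "affinity held"
```

Against the first stdout in the log (responses from s7v5j, vsvl4, and hvqvv mixed together) the check would fail, which is expected before the service's `sessionAffinity` is switched to `ClientIP`; against the second stdout (sixteen hvqvv responses) it passes.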