I1118 06:18:27.002642 10 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1118 06:18:27.008908 10 e2e.go:129] Starting e2e run "c5925752-cfbe-4b4f-859a-1581ff40fb29" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1605680293 - Will randomize all specs
Will run 303 of 5234 specs

Nov 18 06:18:27.651: INFO: >>> kubeConfig: /root/.kube/config
Nov 18 06:18:27.713: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 18 06:18:27.910: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 18 06:18:28.102: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 18 06:18:28.102: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Nov 18 06:18:28.102: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 18 06:18:28.157: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Nov 18 06:18:28.157: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 18 06:18:28.157: INFO: e2e test version: v1.19.5-rc.0
Nov 18 06:18:28.164: INFO: kube-apiserver version: v1.19.0
Nov 18 06:18:28.167: INFO: >>> kubeConfig: /root/.kube/config
Nov 18 06:18:28.193: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:18:28.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Nov 18 06:18:28.266: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1118 06:18:38.372539 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 18 06:19:40.407: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:19:40.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6154" for this suite.
• [SLOW TEST:72.239 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":1,"skipped":44,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:19:40.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 18 06:19:40.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8" in namespace "downward-api-203" to be "Succeeded or Failed"
Nov 18 06:19:40.557: INFO: Pod "downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033003ms
Nov 18 06:19:42.565: INFO: Pod "downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015781299s
Nov 18 06:19:44.576: INFO: Pod "downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026438061s
STEP: Saw pod success
Nov 18 06:19:44.577: INFO: Pod "downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8" satisfied condition "Succeeded or Failed"
Nov 18 06:19:44.593: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8 container client-container:
STEP: delete the pod
Nov 18 06:19:44.699: INFO: Waiting for pod downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8 to disappear
Nov 18 06:19:44.719: INFO: Pod downwardapi-volume-a4df1df9-c207-4e30-b236-d9f8c72780a8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:19:44.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-203" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":112,"failed":0}
SS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:19:44.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-37e84ef7-b489-4c8b-b999-3100081bbd66
STEP: Creating a pod to test consume secrets
Nov 18 06:19:45.306: INFO: Waiting up to 5m0s for pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb" in namespace "secrets-6472" to be "Succeeded or Failed"
Nov 18 06:19:45.352: INFO: Pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.759015ms
Nov 18 06:19:47.818: INFO: Pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512039676s
Nov 18 06:19:49.824: INFO: Pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.517353231s
Nov 18 06:19:51.878: INFO: Pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.571444003s
STEP: Saw pod success
Nov 18 06:19:51.878: INFO: Pod "pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb" satisfied condition "Succeeded or Failed"
Nov 18 06:19:51.911: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb container secret-volume-test:
STEP: delete the pod
Nov 18 06:19:52.028: INFO: Waiting for pod pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb to disappear
Nov 18 06:19:52.109: INFO: Pod pod-secrets-b985fc6f-1118-4ccb-afa5-939fc60e11bb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:19:52.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6472" for this suite.
• [SLOW TEST:7.463 seconds]
[sig-storage] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":114,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:19:52.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-ed01a987-5379-4527-8e64-35eb285ccb61
STEP: Creating a pod to test consume configMaps
Nov 18 06:19:52.350: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646" in namespace "projected-1777" to be "Succeeded or Failed"
Nov 18 06:19:52.379: INFO: Pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646": Phase="Pending", Reason="", readiness=false. Elapsed: 28.983699ms
Nov 18 06:19:54.388: INFO: Pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037786727s
Nov 18 06:19:56.397: INFO: Pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046734105s
Nov 18 06:19:58.405: INFO: Pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054646672s
STEP: Saw pod success
Nov 18 06:19:58.405: INFO: Pod "pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646" satisfied condition "Succeeded or Failed"
Nov 18 06:19:58.410: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646 container projected-configmap-volume-test:
STEP: delete the pod
Nov 18 06:19:58.453: INFO: Waiting for pod pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646 to disappear
Nov 18 06:19:58.466: INFO: Pod pod-projected-configmaps-ab1c8654-9615-41f0-a138-e2cb54056646 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:19:58.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1777" for this suite.
• [SLOW TEST:6.274 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":4,"skipped":119,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:19:58.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:20:16.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3273" for this suite.
• [SLOW TEST:18.127 seconds]
[sig-apps] Job
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":5,"skipped":136,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:20:16.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1118 06:20:29.101785 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 18 06:21:31.131: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Nov 18 06:21:31.132: INFO: Deleting pod "simpletest-rc-to-be-deleted-7cw25" in namespace "gc-9247"
Nov 18 06:21:31.184: INFO: Deleting pod "simpletest-rc-to-be-deleted-8hgtd" in namespace "gc-9247"
Nov 18 06:21:31.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-fbvgn" in namespace "gc-9247"
Nov 18 06:21:31.432: INFO: Deleting pod "simpletest-rc-to-be-deleted-lnbw7" in namespace "gc-9247"
Nov 18 06:21:32.046: INFO: Deleting pod "simpletest-rc-to-be-deleted-qx548" in namespace "gc-9247"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:21:32.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9247" for this suite.
• [SLOW TEST:76.086 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":6,"skipped":157,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:21:32.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 18 06:21:33.181: INFO: Waiting up to 5m0s for pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3" in namespace "emptydir-5393" to be "Succeeded or Failed"
Nov 18 06:21:33.221: INFO: Pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3": Phase="Pending", Reason="", readiness=false. Elapsed: 40.327288ms
Nov 18 06:21:37.095: INFO: Pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914684235s
Nov 18 06:21:39.197: INFO: Pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01639506s
Nov 18 06:21:41.203: INFO: Pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022570107s
STEP: Saw pod success
Nov 18 06:21:41.203: INFO: Pod "pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3" satisfied condition "Succeeded or Failed"
Nov 18 06:21:41.208: INFO: Trying to get logs from node leguer-worker2 pod pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3 container test-container:
STEP: delete the pod
Nov 18 06:21:41.312: INFO: Waiting for pod pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3 to disappear
Nov 18 06:21:41.537: INFO: Pod pod-be3017f9-f48a-47c8-a7d4-eebd07fdcdb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:21:41.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5393" for this suite.
• [SLOW TEST:8.855 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":7,"skipped":162,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:21:41.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 18 06:21:42.134: INFO: Waiting up to 5m0s for pod "pod-57f97940-2e44-4d62-8926-4001232d6a14" in namespace "emptydir-8148" to be "Succeeded or Failed"
Nov 18 06:21:42.160: INFO: Pod "pod-57f97940-2e44-4d62-8926-4001232d6a14": Phase="Pending", Reason="", readiness=false. Elapsed: 25.404998ms
Nov 18 06:21:44.177: INFO: Pod "pod-57f97940-2e44-4d62-8926-4001232d6a14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042831356s
Nov 18 06:21:46.694: INFO: Pod "pod-57f97940-2e44-4d62-8926-4001232d6a14": Phase="Running", Reason="", readiness=true. Elapsed: 4.559759706s
Nov 18 06:21:48.701: INFO: Pod "pod-57f97940-2e44-4d62-8926-4001232d6a14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.566724972s
STEP: Saw pod success
Nov 18 06:21:48.702: INFO: Pod "pod-57f97940-2e44-4d62-8926-4001232d6a14" satisfied condition "Succeeded or Failed"
Nov 18 06:21:48.707: INFO: Trying to get logs from node leguer-worker pod pod-57f97940-2e44-4d62-8926-4001232d6a14 container test-container:
STEP: delete the pod
Nov 18 06:21:48.810: INFO: Waiting for pod pod-57f97940-2e44-4d62-8926-4001232d6a14 to disappear
Nov 18 06:21:48.818: INFO: Pod pod-57f97940-2e44-4d62-8926-4001232d6a14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:21:48.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8148" for this suite.
• [SLOW TEST:7.279 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":166,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:21:48.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:21:54.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7241" for this suite.
• [SLOW TEST:5.377 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":9,"skipped":171,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:21:54.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Nov 18 06:21:54.285: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:22:00.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9052" for this suite.
• [SLOW TEST:6.297 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":10,"skipped":172,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:22:00.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating secret secrets-3785/secret-test-ddfbebb6-c7bd-4655-aade-b772eb8a2e8a
STEP: Creating a pod to test consume secrets
Nov 18 06:22:00.591: INFO: Waiting up to 5m0s for pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e" in namespace "secrets-3785" to be "Succeeded or Failed"
Nov 18 06:22:00.600: INFO: Pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656792ms
Nov 18 06:22:02.607: INFO: Pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01639904s
Nov 18 06:22:05.008: INFO: Pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e": Phase="Running", Reason="", readiness=true. Elapsed: 4.417038877s
Nov 18 06:22:07.016: INFO: Pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.425328756s
STEP: Saw pod success
Nov 18 06:22:07.017: INFO: Pod "pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e" satisfied condition "Succeeded or Failed"
Nov 18 06:22:07.022: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e container env-test:
STEP: delete the pod
Nov 18 06:22:07.069: INFO: Waiting for pod pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e to disappear
Nov 18 06:22:07.076: INFO: Pod pod-configmaps-4998ceba-17c9-4280-b5a6-927ed114f44e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:22:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3785" for this suite.
• [SLOW TEST:6.590 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:22:07.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 18 06:22:07.278: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b" in namespace "downward-api-2928" to be "Succeeded or Failed"
Nov 18 06:22:07.312: INFO: Pod "downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.878309ms
Nov 18 06:22:09.684: INFO: Pod "downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.405418205s
Nov 18 06:22:11.691: INFO: Pod "downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.412427339s
STEP: Saw pod success
Nov 18 06:22:11.691: INFO: Pod "downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b" satisfied condition "Succeeded or Failed"
Nov 18 06:22:12.059: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b container client-container:
STEP: delete the pod
Nov 18 06:22:12.588: INFO: Waiting for pod downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b to disappear
Nov 18 06:22:12.663: INFO: Pod downwardapi-volume-7061b434-90f7-4010-a011-49e72c74d91b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:22:12.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2928" for this suite.
• [SLOW TEST:5.611 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":226,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:22:12.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-1743
Nov 18 06:22:16.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Nov 18 06:22:23.744: INFO: stderr: "I1118 06:22:23.519489 33 log.go:181] (0x40000380b0) (0x4000160000) Create stream\nI1118 06:22:23.522764 33 log.go:181] (0x40000380b0) (0x4000160000) Stream added, broadcasting: 1\nI1118 06:22:23.537555 33 log.go:181] (0x40000380b0) Reply frame received for 1\nI1118 06:22:23.538361 33 log.go:181] (0x40000380b0) (0x40008b4000) Create stream\nI1118 06:22:23.538493 33 log.go:181] (0x40000380b0) (0x40008b4000) Stream added, broadcasting: 3\nI1118 06:22:23.540613 33 log.go:181] (0x40000380b0) Reply frame received for 3\nI1118 06:22:23.541319 33 log.go:181] (0x40000380b0) (0x4000830dc0) Create stream\nI1118 06:22:23.541472 33 log.go:181] (0x40000380b0) (0x4000830dc0) Stream added, broadcasting: 5\nI1118 06:22:23.543365 33 log.go:181] (0x40000380b0) Reply frame received for 5\nI1118 06:22:23.643282 33 log.go:181] (0x40000380b0) Data frame received for 5\nI1118 06:22:23.643467 33 log.go:181] (0x4000830dc0) (5) Data frame handling\nI1118 06:22:23.643797 33 log.go:181] (0x4000830dc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1118 06:22:23.721050 33 log.go:181] (0x40000380b0) Data frame received for 3\nI1118 06:22:23.721391 33 log.go:181] (0x40000380b0) Data frame received for 5\nI1118 06:22:23.721585 33 log.go:181] (0x4000830dc0) (5) Data frame handling\nI1118 06:22:23.721786 33 log.go:181] (0x40008b4000) (3) Data frame handling\nI1118 06:22:23.721941 33 log.go:181] (0x40008b4000) (3) Data frame sent\nI1118 06:22:23.722039 33 log.go:181] (0x40000380b0) Data frame received for 3\nI1118 06:22:23.722139 33 log.go:181] (0x40008b4000) (3) Data frame handling\nI1118 06:22:23.724124 33 log.go:181] (0x40000380b0) Data frame received for 1\nI1118 06:22:23.724257 33 log.go:181] (0x4000160000) (1) Data frame handling\nI1118 06:22:23.724360 33 log.go:181] (0x4000160000) (1) Data frame sent\nI1118 06:22:23.725484 33 log.go:181] (0x40000380b0) (0x4000160000) Stream removed, broadcasting: 1\nI1118 06:22:23.727606 33 log.go:181] (0x40000380b0) Go away received\nI1118 06:22:23.732810 33 log.go:181] (0x40000380b0) (0x4000160000) Stream removed, broadcasting: 1\nI1118 06:22:23.733178 33 log.go:181] (0x40000380b0) (0x40008b4000) Stream removed, broadcasting: 3\nI1118 06:22:23.733372 33 log.go:181] (0x40000380b0) (0x4000830dc0) Stream removed, broadcasting: 5\n"
Nov 18 06:22:23.746: INFO: stdout: "iptables"
Nov 18 06:22:23.746: INFO: proxyMode: iptables
Nov 18 06:22:23.755: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov 18 06:22:23.803: INFO: Pod kube-proxy-mode-detector still exists
Nov 18 06:22:25.804: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov 18 06:22:25.813: INFO: Pod kube-proxy-mode-detector still exists
Nov 18 06:22:27.804: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Nov 18 06:22:27.811: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-1743
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1743
I1118 06:22:27.887881 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1743, replica count: 3
I1118 06:22:30.942958 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1118 06:22:33.945331 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 18 06:22:33.974: INFO: Creating new exec pod
Nov 18 06:22:39.027: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Nov 18 06:22:40.622: INFO: stderr: "I1118 06:22:40.480293 53 log.go:181] (0x4000242dc0) (0x4000b61cc0) Create stream\nI1118 06:22:40.483889 53 log.go:181] (0x4000242dc0) (0x4000b61cc0) Stream added, broadcasting: 1\nI1118 06:22:40.507317 53 log.go:181] (0x4000242dc0) Reply frame received for 1\nI1118 06:22:40.508094 53 log.go:181] (0x4000242dc0) (0x4000b44140) Create stream\nI1118 06:22:40.508213 53 log.go:181] (0x4000242dc0) (0x4000b44140) Stream added, broadcasting: 3\nI1118 06:22:40.509937 53 log.go:181] (0x4000242dc0) Reply frame received for 3\nI1118 06:22:40.510279 53 log.go:181] (0x4000242dc0) (0x4000b45540) Create stream\nI1118 06:22:40.510370 53 log.go:181] (0x4000242dc0) (0x4000b45540) Stream added, broadcasting: 5\nI1118 06:22:40.511676 53 log.go:181] (0x4000242dc0) Reply frame received for 5\nI1118 06:22:40.587157 53 log.go:181] (0x4000242dc0) Data frame received for 5\nI1118 06:22:40.587344 53 log.go:181] (0x4000b45540) (5) Data frame handling\nI1118 06:22:40.587691 53 log.go:181] (0x4000b45540) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI1118 06:22:40.599592 53 log.go:181] (0x4000242dc0) Data frame received for 3\nI1118 06:22:40.599684 53 log.go:181] (0x4000b44140) (3) Data frame handling\nI1118 06:22:40.600223 53 log.go:181] (0x4000242dc0) Data frame received for 5\nI1118 06:22:40.600322 53 log.go:181] (0x4000b45540) (5) Data frame handling\nI1118 06:22:40.600413 53 log.go:181] (0x4000b45540) (5) Data frame sent\nI1118 06:22:40.600494 53 log.go:181] (0x4000242dc0) Data frame received for 5\nI1118 06:22:40.600576 53 log.go:181] (0x4000b45540) (5) Data frame handling\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1118 06:22:40.602203 53 log.go:181] (0x4000242dc0) Data frame received for 1\nI1118 06:22:40.602371 53 log.go:181] (0x4000b61cc0) (1) Data frame handling\nI1118 06:22:40.602544 53 log.go:181] (0x4000b61cc0) (1) Data frame sent\nI1118 06:22:40.604206 53 log.go:181] (0x4000242dc0) (0x4000b61cc0) Stream removed, broadcasting: 1\nI1118 06:22:40.607153 53 log.go:181] (0x4000242dc0) Go away received\nI1118 06:22:40.611059 53 log.go:181] (0x4000242dc0) (0x4000b61cc0) Stream removed, broadcasting: 1\nI1118 06:22:40.611449 53 log.go:181] (0x4000242dc0) (0x4000b44140) Stream removed, broadcasting: 3\nI1118 06:22:40.611735 53 log.go:181] (0x4000242dc0) (0x4000b45540) Stream removed, broadcasting: 5\n"
Nov 18 06:22:40.624: INFO: stdout: ""
Nov 18 06:22:40.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c nc -zv -t -w 2 10.107.148.145 80'
Nov 18 06:22:42.301: INFO: stderr: "I1118 06:22:42.174927 73 log.go:181] (0x40000b91e0) (0x4000413f40) Create stream\nI1118 06:22:42.178298 73 log.go:181] (0x40000b91e0) (0x4000413f40) Stream added, broadcasting: 1\nI1118 06:22:42.192104 73 log.go:181] (0x40000b91e0) Reply frame received for 1\nI1118 06:22:42.193080 73 log.go:181] (0x40000b91e0) (0x40000dc000) Create stream\nI1118 06:22:42.193197 73 log.go:181] (0x40000b91e0) (0x40000dc000) Stream added, broadcasting: 3\nI1118 06:22:42.194699 73 log.go:181] (0x40000b91e0) Reply frame received for 3\nI1118 06:22:42.195148 73 log.go:181] (0x40000b91e0) (0x40001381e0) Create stream\nI1118 06:22:42.195247 73 log.go:181] (0x40000b91e0) (0x40001381e0) Stream added, broadcasting: 5\nI1118 06:22:42.196498 73 log.go:181] (0x40000b91e0) Reply frame received for 5\nI1118 06:22:42.275391 73 log.go:181] (0x40000b91e0) Data frame received for 3\nI1118 06:22:42.275784 73 log.go:181] (0x40000dc000) (3) Data frame handling\nI1118 06:22:42.276456 73 log.go:181] (0x40000b91e0) Data frame received for 1\nI1118 06:22:42.276690 73 log.go:181] (0x4000413f40) (1) Data frame handling\nI1118 06:22:42.277224 73 log.go:181] (0x40000b91e0) Data frame received for 5\nI1118 06:22:42.277386 73 log.go:181] (0x40001381e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.148.145 80\nConnection to 10.107.148.145 80 port [tcp/http] succeeded!\nI1118 06:22:42.283278 73 log.go:181] (0x4000413f40) (1) Data frame sent\nI1118 06:22:42.283737 73 log.go:181] (0x40001381e0) (5) Data frame sent\nI1118 06:22:42.283899 73 log.go:181] (0x40000b91e0) Data frame received for 5\nI1118 06:22:42.283957 73 log.go:181] (0x40001381e0) (5) Data frame handling\nI1118 06:22:42.284688 73 log.go:181] (0x40000b91e0) (0x4000413f40) Stream removed, broadcasting: 1\nI1118 06:22:42.285378 73 log.go:181] (0x40000b91e0) Go away received\nI1118 06:22:42.288828 73 log.go:181] (0x40000b91e0) (0x4000413f40) Stream removed, broadcasting: 1\nI1118 06:22:42.289505 73 log.go:181] (0x40000b91e0) (0x40000dc000) Stream removed, broadcasting: 3\nI1118 06:22:42.290024 73 log.go:181] (0x40000b91e0) (0x40001381e0) Stream removed, broadcasting: 5\n"
Nov 18 06:22:42.302: INFO: stdout: ""
Nov 18 06:22:42.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 30233'
Nov 18 06:22:44.061: INFO: stderr: "I1118 06:22:43.909564 94 log.go:181] (0x4000d18000) (0x4000d10000) Create stream\nI1118 06:22:43.914248 94 log.go:181] (0x4000d18000) (0x4000d10000) Stream added, broadcasting: 1\nI1118 06:22:43.929294 94 log.go:181] (0x4000d18000) Reply frame received for 1\nI1118 06:22:43.930783 94 log.go:181] (0x4000d18000) (0x4000d86500) Create stream\nI1118 06:22:43.930963 94 log.go:181] (0x4000d18000) (0x4000d86500) Stream added, broadcasting: 3\nI1118 06:22:43.933653 94 log.go:181] (0x4000d18000) Reply frame received for 3\nI1118 06:22:43.934453 94 log.go:181] (0x4000d18000) (0x4000d865a0) Create stream\nI1118 06:22:43.934645 94 log.go:181] (0x4000d18000) (0x4000d865a0) Stream added, broadcasting: 5\nI1118 06:22:43.936204 94 log.go:181] (0x4000d18000) Reply frame received for 5\nI1118 06:22:44.037325 94 log.go:181] (0x4000d18000) Data frame received for 5\nI1118 06:22:44.037648 94 log.go:181] (0x4000d18000) Data frame received for 3\nI1118 06:22:44.037818 94 log.go:181] (0x4000d86500) (3) Data frame handling\nI1118 06:22:44.038113 94 log.go:181] (0x4000d865a0) (5) Data frame handling\nI1118 06:22:44.038462 94 log.go:181] (0x4000d18000) Data frame received for 1\nI1118 06:22:44.038677 94 log.go:181] (0x4000d10000) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.18 30233\nI1118 06:22:44.042144 94 log.go:181] (0x4000d10000) (1) Data frame sent\nI1118 06:22:44.042453 94 log.go:181] (0x4000d865a0) (5) Data frame sent\nI1118 06:22:44.042575 94 log.go:181] (0x4000d18000) Data frame received for 5\nI1118 06:22:44.043198 94 log.go:181] (0x4000d18000) (0x4000d10000) Stream removed, broadcasting: 1\nI1118 06:22:44.045614 94 log.go:181] (0x4000d865a0) (5) Data frame handling\nI1118 06:22:44.045707 94 log.go:181] (0x4000d865a0) (5) Data frame sent\nConnection to 172.18.0.18 30233 port [tcp/30233] succeeded!\nI1118 06:22:44.045781 94 log.go:181] (0x4000d18000) Data frame received for 5\nI1118 06:22:44.045847 94 log.go:181] (0x4000d865a0) (5) Data frame handling\nI1118 06:22:44.046552 94 log.go:181] (0x4000d18000) Go away received\nI1118 06:22:44.050006 94 log.go:181] (0x4000d18000) (0x4000d10000) Stream removed, broadcasting: 1\nI1118 06:22:44.050503 94 log.go:181] (0x4000d18000) (0x4000d86500) Stream removed, broadcasting: 3\nI1118 06:22:44.050819 94 log.go:181] (0x4000d18000) (0x4000d865a0) Stream removed, broadcasting: 5\n"
Nov 18 06:22:44.062: INFO: stdout: ""
Nov 18 06:22:44.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 30233'
Nov 18 06:22:45.699: INFO: stderr: "I1118 06:22:45.557363 115 log.go:181] (0x40002eaf20) (0x4000312460) Create stream\nI1118 06:22:45.560027 115 log.go:181] (0x40002eaf20) (0x4000312460) Stream added, broadcasting: 1\nI1118 06:22:45.574113 115 log.go:181] (0x40002eaf20) Reply frame received for 1\nI1118 06:22:45.575208 115 log.go:181] (0x40002eaf20) (0x400062c000) Create stream\nI1118 06:22:45.575357 115 log.go:181] (0x40002eaf20) (0x400062c000) Stream added, broadcasting: 3\nI1118 06:22:45.577249 115 log.go:181] (0x40002eaf20) Reply frame received for 3\nI1118 06:22:45.577685 115 log.go:181] (0x40002eaf20) (0x4000b9c960) Create stream\nI1118 06:22:45.577806 115 log.go:181] (0x40002eaf20) (0x4000b9c960) Stream added, broadcasting: 5\nI1118 06:22:45.579174 115 log.go:181] (0x40002eaf20) Reply frame received for 5\nI1118 06:22:45.675854 115 log.go:181] (0x40002eaf20) Data frame received for 5\nI1118 06:22:45.676121 115 log.go:181] (0x40002eaf20) Data frame received for 3\nI1118 06:22:45.676329 115 log.go:181] (0x400062c000) (3) Data frame handling\nI1118 06:22:45.676448 115 log.go:181] (0x40002eaf20) Data frame received for 1\nI1118 06:22:45.676512 115 log.go:181] (0x4000312460) (1) Data frame handling\nI1118 06:22:45.676742 115 log.go:181] (0x4000b9c960) (5) Data frame handling\nI1118 06:22:45.677546 115 log.go:181] (0x4000312460) (1) Data frame sent\nI1118 06:22:45.678324 115 log.go:181] (0x4000b9c960) (5) Data frame sent\nI1118 06:22:45.678758 115 log.go:181] (0x40002eaf20) Data frame received for 5\nI1118 06:22:45.678871 115 log.go:181] (0x4000b9c960) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 30233\nConnection to 172.18.0.17 30233 port [tcp/30233] succeeded!\nI1118 06:22:45.681780 115 log.go:181] (0x40002eaf20) (0x4000312460) Stream removed, broadcasting: 1\nI1118 06:22:45.685003 115 log.go:181] (0x40002eaf20) Go away received\nI1118 06:22:45.689774 115 log.go:181] (0x40002eaf20) (0x4000312460) Stream removed, broadcasting: 1\nI1118 06:22:45.690195 115 log.go:181] (0x40002eaf20) (0x400062c000) Stream removed, broadcasting: 3\nI1118 06:22:45.690477 115 log.go:181] (0x40002eaf20) (0x4000b9c960) Stream removed, broadcasting: 5\n"
Nov 18 06:22:45.701: INFO: stdout: ""
Nov 18 06:22:45.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:30233/ ; done'
Nov 18 06:22:47.427: INFO: stderr: "I1118 06:22:47.216473 136 log.go:181] (0x40002d0e70) (0x4000b2e460) Create stream\nI1118 06:22:47.219788 136 log.go:181] (0x40002d0e70) (0x4000b2e460) Stream added, broadcasting: 1\nI1118 06:22:47.233204 136 log.go:181] (0x40002d0e70) Reply frame received for 1\nI1118 06:22:47.233927 136 log.go:181] (0x40002d0e70) (0x4000d1a000) Create stream\nI1118 06:22:47.234018 136 log.go:181] (0x40002d0e70) (0x4000d1a000) Stream added, broadcasting: 3\nI1118 06:22:47.235748 136 log.go:181] (0x40002d0e70) Reply frame received for 3\nI1118 06:22:47.236023 136 log.go:181] (0x40002d0e70) (0x4000b2e500) Create stream\nI1118 06:22:47.236083 136 log.go:181] (0x40002d0e70) (0x4000b2e500) Stream added, broadcasting: 5\nI1118 06:22:47.237174 136 log.go:181] (0x40002d0e70) Reply frame received for 5\nI1118 06:22:47.319333 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.319692 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.319913 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.319993 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.320613 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.320930 136 log.go:181] (0x4000d1a000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.323393 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.323480 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.323588 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.323788 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.323876 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.323953 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.324054 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.324168 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.324277 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.330412 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.330528 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.330645 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.330922 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.331019 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.331135 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.331322 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.331396 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.331476 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.337267 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.337366 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.337520 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.338045 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.338177 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.338286 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.338402 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.338494 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.338598 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.341093 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.341185 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.341306 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.341626 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.341759 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.341906 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.342025 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.342148 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.342291 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.348774 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.349010 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.349178 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.349714 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.349823 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.349944 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.350088 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.350180 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.350280 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.355125 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.355203 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.355306 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.355807 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.355880 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.355943 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.355998 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.356053 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.356119 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.359648 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.359723 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.359841 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.360451 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.360616 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.360742 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.360992 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.361146 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.361317 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.366751 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.366875 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.367029 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.367202 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.367352 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.367428 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.367492 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.367555 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.367632 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.371284 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.371433 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.371613 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.372019 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.372113 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\nI1118 06:22:47.372182 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.372282 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.372363 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.372493 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.372645 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.372797 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.373069 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.378589 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.378727 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.378877 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.379178 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.379255 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.379343 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.379409 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.379466 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.379549 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.383613 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.383720 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.383828 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.384355 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.384443 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.384544 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.384654 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.384744 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.384819 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.389483 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.389568 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.389661 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.390380 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.390453 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.390515 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.390584 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.390647 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.390737 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.395015 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.395081 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.395165 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.395830 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.395926 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1118 06:22:47.395996 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.396074 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.396146 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.396225 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.396308 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.396384 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.396452 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n http://172.18.0.18:30233/\nI1118 06:22:47.400615 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.400698 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.400786 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.401507 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.401586 136
log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.401650 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.401713 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.401797 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.401876 136 log.go:181] (0x4000b2e500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.405324 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.405452 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.405590 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.406275 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.406380 136 log.go:181] (0x4000b2e500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:47.406479 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.406589 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.406687 136 log.go:181] (0x4000b2e500) (5) Data frame sent\nI1118 06:22:47.406791 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.409783 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.409902 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.410035 136 log.go:181] (0x4000d1a000) (3) Data frame sent\nI1118 06:22:47.410286 136 log.go:181] (0x40002d0e70) Data frame received for 5\nI1118 06:22:47.410358 136 log.go:181] (0x4000b2e500) (5) Data frame handling\nI1118 06:22:47.410544 136 log.go:181] (0x40002d0e70) Data frame received for 3\nI1118 06:22:47.410619 136 log.go:181] (0x4000d1a000) (3) Data frame handling\nI1118 06:22:47.412316 136 log.go:181] (0x40002d0e70) Data frame received for 1\nI1118 06:22:47.412381 136 log.go:181] (0x4000b2e460) (1) Data frame handling\nI1118 06:22:47.412454 136 log.go:181] (0x4000b2e460) (1) Data frame sent\nI1118 06:22:47.413460 136 log.go:181] (0x40002d0e70) (0x4000b2e460) Stream removed, broadcasting: 1\nI1118 06:22:47.416107 136 log.go:181] (0x40002d0e70) Go away received\nI1118 06:22:47.418476 136 log.go:181] (0x40002d0e70) (0x4000b2e460) Stream removed, broadcasting: 1\nI1118 06:22:47.419083 136 log.go:181] (0x40002d0e70) (0x4000d1a000) Stream removed, broadcasting: 3\nI1118 06:22:47.419291 136 log.go:181] (0x40002d0e70) (0x4000b2e500) Stream removed, broadcasting: 5\n" Nov 18 06:22:47.432: INFO: stdout: "\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc\naffinity-nodeport-timeout-lq9bc" Nov 18 06:22:47.433: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.433: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.433: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.433: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.433: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 
06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.434: INFO: Received response from host: affinity-nodeport-timeout-lq9bc Nov 18 06:22:47.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.18:30233/' Nov 18 06:22:49.037: INFO: stderr: "I1118 06:22:48.898475 156 log.go:181] (0x40005b1080) (0x40005a85a0) Create stream\nI1118 06:22:48.901898 156 log.go:181] (0x40005b1080) (0x40005a85a0) Stream added, broadcasting: 1\nI1118 06:22:48.913699 156 log.go:181] (0x40005b1080) Reply frame received for 1\nI1118 06:22:48.914300 156 log.go:181] (0x40005b1080) (0x4000bb8280) Create stream\nI1118 06:22:48.914363 156 log.go:181] (0x40005b1080) (0x4000bb8280) Stream added, broadcasting: 3\nI1118 06:22:48.915984 156 log.go:181] (0x40005b1080) Reply frame received for 3\nI1118 06:22:48.916437 156 log.go:181] (0x40005b1080) (0x4000bc2d20) Create stream\nI1118 06:22:48.916527 156 log.go:181] (0x40005b1080) (0x4000bc2d20) Stream added, broadcasting: 5\nI1118 06:22:48.918573 156 log.go:181] (0x40005b1080) Reply frame received for 5\nI1118 06:22:49.009253 156 log.go:181] (0x40005b1080) Data frame received for 5\nI1118 06:22:49.009535 156 log.go:181] (0x4000bc2d20) (5) Data frame handling\nI1118 06:22:49.010191 156 log.go:181] (0x4000bc2d20) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:22:49.014115 156 log.go:181] (0x40005b1080) Data frame received for 3\nI1118 06:22:49.014246 156 log.go:181] (0x4000bb8280) (3) Data frame handling\nI1118 06:22:49.014376 156 log.go:181] (0x4000bb8280) (3) Data frame sent\nI1118 06:22:49.015346 156 log.go:181] (0x40005b1080) Data frame received for 3\nI1118 06:22:49.015441 156 log.go:181] (0x4000bb8280) (3) Data frame handling\nI1118 06:22:49.015836 156 log.go:181] (0x40005b1080) Data frame received for 5\nI1118 06:22:49.016020 156 log.go:181] (0x4000bc2d20) (5) Data frame handling\nI1118 06:22:49.017454 156 log.go:181] (0x40005b1080) Data frame received for 1\nI1118 06:22:49.017533 156 log.go:181] (0x40005a85a0) (1) Data frame handling\nI1118 06:22:49.017613 156 log.go:181] (0x40005a85a0) (1) Data frame sent\nI1118 06:22:49.019467 156 log.go:181] (0x40005b1080) (0x40005a85a0) Stream removed, broadcasting: 1\nI1118 06:22:49.021662 156 log.go:181] (0x40005b1080) Go away received\nI1118 06:22:49.026658 156 log.go:181] (0x40005b1080) (0x40005a85a0) Stream removed, broadcasting: 1\nI1118 06:22:49.027085 156 log.go:181] (0x40005b1080) (0x4000bb8280) Stream removed, broadcasting: 3\nI1118 06:22:49.027372 156 log.go:181] (0x40005b1080) (0x4000bc2d20) Stream removed, broadcasting: 5\n" Nov 18 06:22:49.038: INFO: stdout: 
"affinity-nodeport-timeout-lq9bc" Nov 18 06:23:04.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1743 execpod-affinityvj8fg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.18:30233/' Nov 18 06:23:05.706: INFO: stderr: "I1118 06:23:05.583734 176 log.go:181] (0x400014c370) (0x4000b315e0) Create stream\nI1118 06:23:05.586165 176 log.go:181] (0x400014c370) (0x4000b315e0) Stream added, broadcasting: 1\nI1118 06:23:05.597670 176 log.go:181] (0x400014c370) Reply frame received for 1\nI1118 06:23:05.598275 176 log.go:181] (0x400014c370) (0x4000b4c460) Create stream\nI1118 06:23:05.598337 176 log.go:181] (0x400014c370) (0x4000b4c460) Stream added, broadcasting: 3\nI1118 06:23:05.599827 176 log.go:181] (0x400014c370) Reply frame received for 3\nI1118 06:23:05.600166 176 log.go:181] (0x400014c370) (0x40009f6000) Create stream\nI1118 06:23:05.600256 176 log.go:181] (0x400014c370) (0x40009f6000) Stream added, broadcasting: 5\nI1118 06:23:05.601634 176 log.go:181] (0x400014c370) Reply frame received for 5\nI1118 06:23:05.685356 176 log.go:181] (0x400014c370) Data frame received for 5\nI1118 06:23:05.685577 176 log.go:181] (0x40009f6000) (5) Data frame handling\nI1118 06:23:05.685942 176 log.go:181] (0x40009f6000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30233/\nI1118 06:23:05.688462 176 log.go:181] (0x400014c370) Data frame received for 3\nI1118 06:23:05.688526 176 log.go:181] (0x4000b4c460) (3) Data frame handling\nI1118 06:23:05.688606 176 log.go:181] (0x4000b4c460) (3) Data frame sent\nI1118 06:23:05.689098 176 log.go:181] (0x400014c370) Data frame received for 3\nI1118 06:23:05.689171 176 log.go:181] (0x4000b4c460) (3) Data frame handling\nI1118 06:23:05.689684 176 log.go:181] (0x400014c370) Data frame received for 5\nI1118 06:23:05.689752 176 log.go:181] (0x40009f6000) (5) Data frame handling\nI1118 06:23:05.690699 176 log.go:181] (0x400014c370) Data frame received for 1\nI1118 06:23:05.690828 176 log.go:181] (0x4000b315e0) (1) Data frame handling\nI1118 06:23:05.690950 176 log.go:181] (0x4000b315e0) (1) Data frame sent\nI1118 06:23:05.692339 176 log.go:181] (0x400014c370) (0x4000b315e0) Stream removed, broadcasting: 1\nI1118 06:23:05.694338 176 log.go:181] (0x400014c370) Go away received\nI1118 06:23:05.697245 176 log.go:181] (0x400014c370) (0x4000b315e0) Stream removed, broadcasting: 1\nI1118 06:23:05.697501 176 log.go:181] (0x400014c370) (0x4000b4c460) Stream removed, broadcasting: 3\nI1118 06:23:05.697679 176 log.go:181] (0x400014c370) (0x40009f6000) Stream removed, broadcasting: 5\n" Nov 18 06:23:05.707: INFO: stdout: "affinity-nodeport-timeout-k2xtb" Nov 18 06:23:05.708: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1743, will wait for the garbage collector to delete the pods Nov 18 06:23:05.913: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 66.086945ms Nov 18 06:23:06.415: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 502.261656ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:23:20.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1743" for this suite. 
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:67.694 seconds]
[sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":13,"skipped":259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:23:20.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Nov 18 06:23:20.526: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Nov 18 06:23:20.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:23.137: INFO: stderr: ""
Nov 18 06:23:23.137: INFO: stdout: "service/agnhost-replica created\n"
Nov 18 06:23:23.138: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Nov 18 06:23:23.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:26.293: INFO: stderr: ""
Nov 18 06:23:26.294: INFO: stdout: "service/agnhost-primary created\n"
Nov 18 06:23:26.295: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Nov 18 06:23:26.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:29.027: INFO: stderr: ""
Nov 18 06:23:29.028: INFO: stdout: "service/frontend created\n"
Nov 18 06:23:29.032: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Nov 18 06:23:29.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:31.524: INFO: stderr: ""
Nov 18 06:23:31.524: INFO: stdout: "deployment.apps/frontend created\n"
Nov 18 06:23:31.526: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Nov 18 06:23:31.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:35.481: INFO: stderr: ""
Nov 18 06:23:35.482: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Nov 18 06:23:35.484: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Nov 18 06:23:35.484: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8993'
Nov 18 06:23:38.814: INFO: stderr: ""
Nov 18 06:23:38.814: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Nov 18 06:23:38.814: INFO: Waiting for all frontend pods to be Running.
Nov 18 06:23:38.866: INFO: Waiting for frontend to serve content.
Nov 18 06:23:39.937: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Nov 18 06:23:44.955: INFO: Trying to add a new entry to the guestbook.
Nov 18 06:23:44.969: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Nov 18 06:23:44.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:46.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:46.440: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Nov 18 06:23:46.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:47.875: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:47.875: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Nov 18 06:23:47.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:49.292: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:49.293: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 18 06:23:49.294: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:50.661: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:50.661: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Nov 18 06:23:50.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:52.236: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:52.236: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Nov 18 06:23:52.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8993'
Nov 18 06:23:53.672: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 18 06:23:53.672: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:23:53.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8993" for this suite.
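One detail worth noting in the manifests above: nothing ties a Service to a Deployment by name; the link is purely label selection. Trimmed to the wiring fields (a condensation of the frontend objects echoed earlier, not a new manifest):

# Service side: route port 80 to any pod carrying these labels
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
# Deployment side: the pod template stamps out pods with matching labels
spec:
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend

The single failed poll at 06:23:39 (HTTP 417) is consistent with this wiring: the Service existed before its pods were serving, and the validation loop retried and succeeded a few seconds later.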
• [SLOW TEST:33.830 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":14,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:23:54.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:23:57.149: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:23:59.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:24:01.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, 
loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:24:03.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277437, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:24:06.260: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:24:06.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3593" for this suite. STEP: Destroying namespace "webhook-3593-markers" for this suite. 
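The decisive detail in this test is the failure policy: the webhook's backend is deliberately unreachable, and failurePolicy: Fail means the API server must reject any matching request for which it cannot obtain an admission verdict. A minimal configuration of that shape is sketched below; the names, namespace, and rules are illustrative, not the test's actual generated objects.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example  # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail  # reject the request whenever the webhook cannot be reached
  clientConfig:
    service:
      namespace: default     # illustrative
      name: no-such-service  # deliberately unreachable backend, as in the test
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]  # the test creates a configmap and expects rejection
  sideEffects: None
  admissionReviewVersions: ["v1"]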
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.292 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":15,"skipped":386,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:24:06.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:24:10.195: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:24:12.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:24:14.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277450, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:24:17.421: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:24:17.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-895-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:24:18.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8909" for this suite. STEP: Destroying namespace "webhook-8909-markers" for this suite. 
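The storage-version half of this test hinges on a CRD that serves two versions and flips which one is persisted. A sketch of that shape follows; all names are illustrative, since the e2e suite generates its own randomized CRD, and the mutating webhook that rewrites the custom resources is a separate configuration not shown here.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.webhook.example.com  # illustrative
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true   # storage version while the first custom resource is created
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false  # the test then patches the CRD so v2 becomes the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true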
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.467 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":16,"skipped":412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:24:19.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Nov 18 06:24:19.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4" in namespace "projected-4360" to be "Succeeded or Failed"
Nov 18 06:24:19.181: INFO: Pod "downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.478813ms
Nov 18 06:24:21.188: INFO: Pod "downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026783151s
Nov 18 06:24:23.195: INFO: Pod "downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033687051s
STEP: Saw pod success
Nov 18 06:24:23.195: INFO: Pod "downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4" satisfied condition "Succeeded or Failed"
Nov 18 06:24:23.198: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4 container client-container: 
STEP: delete the pod
Nov 18 06:24:23.412: INFO: Waiting for pod downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4 to disappear
Nov 18 06:24:23.444: INFO: Pod downwardapi-volume-a232c573-8d25-4869-a660-9bd31755a6c4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:24:23.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4360" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":457,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:24:23.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Nov 18 06:24:23.549: INFO: Creating deployment "webserver-deployment"
Nov 18 06:24:23.575: INFO: Waiting for observed generation 1
Nov 18 06:24:25.590: INFO: Waiting for all required pods to come up
Nov 18 06:24:25.603: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Nov 18 06:24:37.622: INFO: Waiting for deployment "webserver-deployment" to complete
Nov 18 06:24:37.631: INFO: Updating deployment "webserver-deployment" with a non-existent image
Nov 18 06:24:37.646: INFO: Updating deployment webserver-deployment
Nov 18 06:24:37.647: INFO: Waiting for observed generation 2
Nov 18 06:24:39.686: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Nov 18 06:24:39.694: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Nov 18 06:24:39.701: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 18 06:24:39.714: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Nov 18 06:24:39.714: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Nov 18 06:24:39.718: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Nov 18 06:24:39.724: INFO: Verifying that deployment
"webserver-deployment" has minimum required number of available replicas Nov 18 06:24:39.725: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Nov 18 06:24:39.736: INFO: Updating deployment webserver-deployment Nov 18 06:24:39.737: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Nov 18 06:24:40.494: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Nov 18 06:24:42.706: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 18 06:24:42.853: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6406 /apis/apps/v1/namespaces/deployment-6406/deployments/webserver-deployment e9ceea14-016e-4a5e-9b0d-386dcb9c9ce2 11981012 3 2020-11-18 06:24:23 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-18 06:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4002513c88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-18 06:24:39 +0000 UTC,LastTransitionTime:2020-11-18 06:24:39 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-11-18 06:24:41 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Nov 18 06:24:42.865: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6406 /apis/apps/v1/namespaces/deployment-6406/replicasets/webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 11981008 3 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e9ceea14-016e-4a5e-9b0d-386dcb9c9ce2 0x4001b840d7 0x4001b840d8}] [] [{kube-controller-manager Update apps/v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9ceea14-016e-4a5e-9b0d-386dcb9c9ce2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4001b84158 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] 
[] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 06:24:42.866: INFO: All old ReplicaSets of Deployment "webserver-deployment": Nov 18 06:24:42.866: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6406 /apis/apps/v1/namespaces/deployment-6406/replicasets/webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 11980983 3 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e9ceea14-016e-4a5e-9b0d-386dcb9c9ce2 0x4001b841b7 0x4001b841b8}] [] [{kube-controller-manager Update apps/v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9ceea14-016e-4a5e-9b0d-386dcb9c9ce2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4001b84228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Nov 18 06:24:43.050: INFO: Pod "webserver-deployment-795d758f88-4cgbr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4cgbr webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-4cgbr 12282f8e-4318-4235-ba82-9ad21eaf006d 11981052 0 2020-11-18 06:24:41 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 
ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1317 0x40025d1318}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolera
tion{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.051: INFO: Pod "webserver-deployment-795d758f88-69wsq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-69wsq webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-69wsq 8e3aeba0-586c-435f-92c6-62f56c8c2b08 11981015 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d14c7 0x40025d14c8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
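Every pod dumped in this stretch of the log is owned by ReplicaSet webserver-deployment-795d758f88 and runs the container image webserver:404 — a tag that cannot be pulled, which the deployment test rolls out deliberately — so each pod is stuck at Phase=Pending with its httpd container in ContainerCreating, and the framework reports it as "not available". A minimal client-go sketch (not part of the e2e suite; it assumes the namespace deployment-6406, the label selector visible in the dump, and a kubeconfig at ~/.kube/config) that reproduces this pod/phase/reason view:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at $HOME/.kube/config, as in this run.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Select the pods owned by ReplicaSet webserver-deployment-795d758f88
	// via the labels shown in the dump.
	pods, err := cs.CoreV1().Pods("deployment-6406").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=795d758f88"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				// Expected output for this run: Pending / ContainerCreating for every pod.
				fmt.Printf("%s\t%s\t%s\n", p.Name, p.Status.Phase, st.State.Waiting.Reason)
			}
		}
	}
}
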
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.052: INFO: Pod "webserver-deployment-795d758f88-b95f2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b95f2 webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-b95f2 1399b12f-d6d9-44b5-92c4-fdb7c226c95d 11980905 0 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1677 0x40025d1678}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.053: INFO: Pod "webserver-deployment-795d758f88-k5hfr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-k5hfr webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-k5hfr bfdad737-6085-4882-beb5-23fda9f3e4dc 11981035 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1827 0x40025d1828}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
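Each dump also records QOSClass:BestEffort, which follows directly from the httpd container declaring empty Resources.Requests and Resources.Limits. A simplified restatement of the QoS classification (the authoritative logic lives in the kubelet's qos helpers; this sketch ignores extended resources and per-resource edge cases):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosOf is a simplified sketch of the QoS rules: BestEffort when no container
// sets any request or limit; Guaranteed when every container sets CPU and
// memory limits equal to its requests; Burstable otherwise.
func qosOf(pod *corev1.Pod) corev1.PodQOSClass {
	anySet, allGuaranteed := false, true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			anySet = true
		}
		for _, res := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			lim, hasLim := c.Resources.Limits[res]
			req, hasReq := c.Resources.Requests[res]
			if !hasLim || !hasReq || lim.Cmp(req) != 0 {
				allGuaranteed = false
			}
		}
	}
	switch {
	case !anySet:
		return corev1.PodQOSBestEffort
	case allGuaranteed:
		return corev1.PodQOSGuaranteed
	default:
		return corev1.PodQOSBurstable
	}
}

func main() {
	// The httpd container in the dump has empty Requests and Limits,
	// so the pod classifies as BestEffort.
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "httpd"}}}}
	fmt.Println(qosOf(pod)) // BestEffort
}
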
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.055: INFO: Pod "webserver-deployment-795d758f88-l8kbk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l8kbk webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-l8kbk f94ebbcb-f11c-4c62-a657-ecedd5f00cfa 11980960 0 2020-11-18 06:24:39 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d19d7 0x40025d19d8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.056: INFO: Pod "webserver-deployment-795d758f88-m6jh5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m6jh5 webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-m6jh5 c517e30b-6c34-4800-bf38-3f2177c4bd8d 11980909 0 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1b87 0x40025d1b88}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:38 +0000 UTC FieldsV1 
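The condition block repeated in every dump — PodScheduled=True, Initialized=True, Ready=False with Reason=ContainersNotReady — is exactly why each line says "is not available": the deployment machinery counts a pod as available only once its Ready condition has been True for at least the deployment's minReadySeconds. A simplified sketch of that rule (the upstream helper is IsPodAvailable in k8s.io/kubernetes/pkg/api/v1/pod; this version skips the Running-phase check):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isAvailable: the pod must be Ready, and must have been Ready
// for at least minReadySeconds.
func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type != corev1.PodReady {
			continue
		}
		if cond.Status != corev1.ConditionTrue {
			return false // Ready=False / ContainersNotReady, as in the dumps here
		}
		readyFor := now.Sub(cond.LastTransitionTime.Time)
		return minReadySeconds == 0 || readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false // no Ready condition recorded yet
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{{
			Type:               corev1.PodReady,
			Status:             corev1.ConditionFalse,
			Reason:             "ContainersNotReady",
			LastTransitionTime: metav1.Now(),
		}},
	}}
	fmt.Println(isAvailable(pod, 0, time.Now())) // false
}
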
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.058: INFO: Pod "webserver-deployment-795d758f88-pvslt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pvslt webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-pvslt 433312f9-b32c-485b-864a-32614c6ace6c 11981027 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1d37 0x40025d1d38}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.059: INFO: Pod "webserver-deployment-795d758f88-rqvlh" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rqvlh webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-rqvlh 791e5656-5da2-43de-a313-b05b515d828c 11981043 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40025d1ee7 0x40025d1ee8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
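The two managedFields entries on every pod — kube-controller-manager for the create-time spec, labels, and ownerReferences, and kubelet for the status — are the FieldsV1 ownership records that server-side apply uses to attribute each field to a writer and to detect conflicting managers. A small sketch that surfaces the same manager/operation/version tuples from any object:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// printManagers lists which field manager owns part of the object, mirroring
// the [{kube-controller-manager Update v1 ...} {kubelet Update v1 ...}]
// entries embedded in the dumps above.
func printManagers(obj metav1.Object) {
	for _, mf := range obj.GetManagedFields() {
		fmt.Printf("%s\t%s\t%s\t%v\n", mf.Manager, mf.Operation, mf.APIVersion, mf.Time)
	}
}

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name: "webserver-deployment-795d758f88-69wsq",
		ManagedFields: []metav1.ManagedFieldsEntry{
			{Manager: "kube-controller-manager", Operation: metav1.ManagedFieldsOperationUpdate, APIVersion: "v1"},
			{Manager: "kubelet", Operation: metav1.ManagedFieldsOperationUpdate, APIVersion: "v1"},
		},
	}}
	printManagers(pod)
}
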
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.060: INFO: Pod "webserver-deployment-795d758f88-tr88f" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tr88f webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-tr88f 5d8a27bf-6ed2-4b18-95fc-0cb8f431705b 11981053 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x4000730097 0x4000730098}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.062: INFO: Pod "webserver-deployment-795d758f88-vdnnj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vdnnj webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-vdnnj a7173eef-c3d1-42a7-9402-36047d9f7717 11980890 0 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x4000730247 0x4000730248}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 
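Every pod spec in these dumps carries exactly two tolerations — node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, both Exists/NoExecute with TolerationSeconds:*300. Pods that declare no tolerations get this pair injected by the DefaultTolerationSeconds admission plugin, and the 300-second window bounds how long a pod may stay bound to a failed node before the node lifecycle controller evicts it. A sketch of reading that grace period back out of a pod spec (taint key written as a literal; the helper name is mine):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// notReadyGrace reports how many seconds a pod tolerates the
// node.kubernetes.io/not-ready:NoExecute taint before it may be evicted;
// -1 means it tolerates the taint forever. A sketch of the rule only,
// not the controller's exact bookkeeping.
func notReadyGrace(pod *corev1.Pod) int64 {
	for _, t := range pod.Spec.Tolerations {
		if t.Key == "node.kubernetes.io/not-ready" && t.Effect == corev1.TaintEffectNoExecute {
			if t.TolerationSeconds == nil {
				return -1
			}
			return *t.TolerationSeconds
		}
	}
	return 0 // no toleration: evictable as soon as the taint is applied
}

func main() {
	secs := int64(300)
	pod := &corev1.Pod{Spec: corev1.PodSpec{Tolerations: []corev1.Toleration{{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOpExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &secs,
	}}}}
	fmt.Println(notReadyGrace(pod)) // 300, matching the dumps above
}
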
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.063: INFO: Pod "webserver-deployment-795d758f88-xjpwc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xjpwc webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-xjpwc ff6f5a5c-3770-4ad0-b647-7e8d0ef312ea 11981009 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x4000730407 0x4000730408}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
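Each pod's single ownerReferences entry points at the ReplicaSet (uid fe87d956-fb34-4c30-a688-5f7ff70e53d0) with controller and blockOwnerDeletion both true; that controller reference is what ties the pods to webserver-deployment-795d758f88 for adoption, scaling, and cascading deletion. A sketch using apimachinery's GetControllerOf helper (the pod literal just mirrors fields from the dump):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ctrl, block := true, true
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name: "webserver-deployment-795d758f88-b95f2",
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion:         "apps/v1",
			Kind:               "ReplicaSet",
			Name:               "webserver-deployment-795d758f88",
			UID:                "fe87d956-fb34-4c30-a688-5f7ff70e53d0",
			Controller:         &ctrl,
			BlockOwnerDeletion: &block,
		}},
	}}
	// GetControllerOf returns the single owner with Controller=true, or nil.
	if ref := metav1.GetControllerOf(pod); ref != nil {
		fmt.Printf("%s/%s owns %s\n", ref.Kind, ref.Name, pod.Name)
	}
}
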
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.065: INFO: Pod "webserver-deployment-795d758f88-z6492" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-z6492 webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-z6492 0c19ed61-776d-46b9-8432-40c4d46fb6cb 11980879 0 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x40007305b7 0x40007305b8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.066: INFO: Pod "webserver-deployment-795d758f88-zplqd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zplqd webserver-deployment-795d758f88- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-795d758f88-zplqd aca30968-db10-43a7-b9a9-9f4d0deb50b5 11980882 0 2020-11-18 06:24:37 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 fe87d956-fb34-4c30-a688-5f7ff70e53d0 0x4000730767 0x4000730768}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe87d956-fb34-4c30-a688-5f7ff70e53d0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.067: INFO: Pod "webserver-deployment-dd94f59b7-5fg94" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5fg94 webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-5fg94 d36119b3-813d-4ff3-afc0-5f4e214dbe8e 11980961 0 2020-11-18 06:24:39 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000730917 0x4000730918}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.068: INFO: Pod "webserver-deployment-dd94f59b7-bc6cb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bc6cb webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-bc6cb 8efaf601-fde0-4bb6-ba7e-9deebb271f92 11981022 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000730aa7 0x4000730aa8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.070: INFO: Pod "webserver-deployment-dd94f59b7-bpr7m" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bpr7m webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-bpr7m c6cac7b8-2d04-4588-a485-5ff48890f18a 11980835 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000730c37 0x4000730c38}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.73\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.73,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e5474899332bf8b45f13b6540880abf5838c6271deb30572232f9ba9eeb96497,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.73,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.071: INFO: Pod "webserver-deployment-dd94f59b7-cj9qk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cj9qk webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-cj9qk b2ae2496-a573-4f28-852b-1c6e7d59199c 11980997 0 2020-11-18 06:24:39 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000730df7 0x4000730df8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.072: INFO: Pod "webserver-deployment-dd94f59b7-d9dpk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-d9dpk webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-d9dpk 785deb9a-0696-416d-8ec9-6fbcd9e200c9 11981038 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000730f87 0x4000730f88}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.073: INFO: Pod "webserver-deployment-dd94f59b7-fpzvv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fpzvv webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-fpzvv 9eb4a553-7b85-4ba8-9526-7b1d4a2dfa74 11981033 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000731127 0x4000731128}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.074: INFO: Pod "webserver-deployment-dd94f59b7-gwpn5" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gwpn5 webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-gwpn5 4507a789-b8c1-4c07-932c-724b19e01162 11981017 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x40007312d7 0x40007312d8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.075: INFO: Pod "webserver-deployment-dd94f59b7-hn26w" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hn26w webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-hn26w 0054c0fe-1181-4f6b-b1c7-b1a4601a56fa 11980828 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000731517 0x4000731518}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.246,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e25fe625fc999f54e91d3fd0dea290d9e5935e551282f8c37a65b927e6bf3ef5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.076: INFO: Pod "webserver-deployment-dd94f59b7-hqlpx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hqlpx webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-hqlpx e33ca20c-7e43-4c38-b649-2b3fea96796a 11981031 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4000731797 0x4000731798}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.078: INFO: Pod "webserver-deployment-dd94f59b7-hwhtk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hwhtk webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-hwhtk 0bfbdbb8-c7a5-4a36-9027-d7c2ff0dbc15 11980770 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003156087 0x4003156088}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.244\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.244,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://19414c471750681f894c01adce96a7861537c391bc0c3a5fd050446e2d00e75e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.079: INFO: Pod "webserver-deployment-dd94f59b7-nqprx" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nqprx webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-nqprx e243668a-d0b6-4b07-8e94-a0c336c14749 11980838 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x40031565e7 0x40031565e8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.74,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b77ee7aad79d71413a7c9aee2254ccbede3d112746baf8a4d88c5aa5a24c2d1b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.080: INFO: Pod "webserver-deployment-dd94f59b7-pfwt4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pfwt4 webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-pfwt4 0cef3a18-eaeb-4a57-8ec8-1f6b36fbc1e3 11980831 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x40031568a7 0x40031568a8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.75\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.75,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4d02c63f535995a36c0b73a6399a7431e04f0a879c178bf775230013c66ecdda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.75,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.081: INFO: Pod "webserver-deployment-dd94f59b7-szrkd" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-szrkd webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-szrkd 32c381b5-0c66-44c4-b446-0ad22bd0d7e8 11980791 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003156c87 0x4003156c88}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.71,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7e7f229bd58c69f92c9459071c350f819fdfa80ead00e68593871d7861d4c1c9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.082: INFO: Pod "webserver-deployment-dd94f59b7-v2zvc" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v2zvc webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-v2zvc 75209c89-5fcd-48af-9007-a95cb40609ee 11981036 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003157087 0x4003157088}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.083: INFO: Pod "webserver-deployment-dd94f59b7-wgkrl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wgkrl webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-wgkrl 7862381e-db2b-4fb9-96b9-e759d6a1d8e0 11981025 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003157227 0x4003157228}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.085: INFO: Pod "webserver-deployment-dd94f59b7-x98wh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x98wh webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-x98wh 96d1dc51-d88b-4409-a7dc-aeb06cad7a3f 11981042 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x40031573b7 0x40031573b8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:42 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.086: INFO: Pod "webserver-deployment-dd94f59b7-xt99t" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xt99t webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-xt99t cff48471-9213-4b31-b612-4c8167bf8cd5 11981007 0 2020-11-18 06:24:39 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003157547 0x4003157548}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.087: INFO: Pod "webserver-deployment-dd94f59b7-xwdt9" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xwdt9 webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-xwdt9 25b2555c-1f19-4627-8a54-5709e274536c 11981002 0 2020-11-18 06:24:40 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x40031576d7 0x40031576d8}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 06:24:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.088: INFO: Pod "webserver-deployment-dd94f59b7-zr7ts" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zr7ts webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-zr7ts a1f10cc5-bdf7-4650-9651-1093d4146ad0 11980799 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003157867 0x4003157868}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.72\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.72,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://80cc0e8d5cc8f9b50b9c5c8b9b12476e82970da57fb90e2704ffb5afd812d34b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.72,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 06:24:43.089: INFO: Pod "webserver-deployment-dd94f59b7-zt556" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zt556 webserver-deployment-dd94f59b7- deployment-6406 /api/v1/namespaces/deployment-6406/pods/webserver-deployment-dd94f59b7-zt556 aa807789-0400-4025-be72-f862bbc879c1 11980823 0 2020-11-18 06:24:23 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 ba3ab135-9ff7-432d-8bf0-94cfc71dd36b 0x4003157a17 0x4003157a18}] [] [{kube-controller-manager Update v1 2020-11-18 06:24:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba3ab135-9ff7-432d-8bf0-94cfc71dd36b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:24:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9dvtf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9dvtf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9dvtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:24:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.245,StartTime:2020-11-18 06:24:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:24:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://870a5e4409e860844db5bc5c26db636eb01f19bc9c940ad95c2f8161db8f57a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:24:43.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6406" for this suite. • [SLOW TEST:19.703 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":18,"skipped":469,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:24:43.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1118 06:25:27.522608 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
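The two garbage-collector steps just logged ("delete the rc", then waiting 30 seconds to confirm the pods survive) hinge on the deletion propagation policy. Below is a minimal client-go sketch of that kind of orphaning delete, not the e2e framework's actual code: the clientset wiring is assumed, the namespace is taken from the log, and the RC name is inferred from the pod names above.

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // orphanDelete removes a ReplicationController while leaving its pods
    // behind: with DeletePropagationOrphan the garbage collector strips the
    // owner reference from the dependents instead of deleting them.
    func orphanDelete(ctx context.Context, cs kubernetes.Interface) error {
    	orphan := metav1.DeletePropagationOrphan
    	return cs.CoreV1().ReplicationControllers("gc-2993").Delete(
    		ctx, "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &orphan})
    }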
Nov 18 06:26:29.566: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Nov 18 06:26:29.566: INFO: Deleting pod "simpletest.rc-5fnj2" in namespace "gc-2993" Nov 18 06:26:29.601: INFO: Deleting pod "simpletest.rc-8gx6g" in namespace "gc-2993" Nov 18 06:26:29.652: INFO: Deleting pod "simpletest.rc-dp2bl" in namespace "gc-2993" Nov 18 06:26:29.705: INFO: Deleting pod "simpletest.rc-f28bb" in namespace "gc-2993" Nov 18 06:26:29.765: INFO: Deleting pod "simpletest.rc-l8cd6" in namespace "gc-2993" Nov 18 06:26:30.323: INFO: Deleting pod "simpletest.rc-pxf4f" in namespace "gc-2993" Nov 18 06:26:30.949: INFO: Deleting pod "simpletest.rc-rf8gr" in namespace "gc-2993" Nov 18 06:26:31.131: INFO: Deleting pod "simpletest.rc-rk9wq" in namespace "gc-2993" Nov 18 06:26:31.379: INFO: Deleting pod "simpletest.rc-sp9nr" in namespace "gc-2993" Nov 18 06:26:31.496: INFO: Deleting pod "simpletest.rc-x89rw" in namespace "gc-2993" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:26:31.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2993" for this suite. • [SLOW TEST:109.007 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":19,"skipped":480,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:26:32.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:26:32.705: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Nov 18 06:26:37.719: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 18 06:26:37.720: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 18 06:26:43.966: INFO: Deployment 
"test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5953 /apis/apps/v1/namespaces/deployment-5953/deployments/test-cleanup-deployment 8535c302-8ad6-48c2-8658-b557faf48356 11981997 1 2020-11-18 06:26:37 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-11-18 06:26:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 06:26:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400201df08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-18 06:26:37 +0000 UTC,LastTransitionTime:2020-11-18 06:26:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-11-18 06:26:42 +0000 UTC,LastTransitionTime:2020-11-18 06:26:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 18 06:26:43.973: INFO: New ReplicaSet 
"test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-5953 /apis/apps/v1/namespaces/deployment-5953/replicasets/test-cleanup-deployment-5d446bdd47 8778e934-2834-4d23-bf9d-688f6aa3599d 11981986 1 2020-11-18 06:26:37 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 8535c302-8ad6-48c2-8658-b557faf48356 0x4003500a47 0x4003500a48}] [] [{kube-controller-manager Update apps/v1 2020-11-18 06:26:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8535c302-8ad6-48c2-8658-b557faf48356\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003500ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 18 06:26:43.980: INFO: Pod "test-cleanup-deployment-5d446bdd47-72hq7" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-72hq7 test-cleanup-deployment-5d446bdd47- deployment-5953 /api/v1/namespaces/deployment-5953/pods/test-cleanup-deployment-5d446bdd47-72hq7 edb540db-4e4f-470a-88c8-8e4ef69916a7 11981985 0 2020-11-18 06:26:37 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 8778e934-2834-4d23-bf9d-688f6aa3599d 0x40035011d7 0x40035011d8}] [] [{kube-controller-manager Update v1 2020-11-18 06:26:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8778e934-2834-4d23-bf9d-688f6aa3599d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:26:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.94\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sbn7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sbn7q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sbn7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tolera
tion{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:26:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:26:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:26:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:26:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.94,StartTime:2020-11-18 06:26:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:26:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://22a7e2adb9ad9fea4e9aa2222cdd7b38481b9b4b1bed5d98b3c7d502a58223c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.94,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:26:43.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5953" for this suite. 
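The Deployment dump above shows RevisionHistoryLimit:*0, which is what drives the cleanup being tested: with a history limit of zero, the deployment controller deletes each superseded ReplicaSet as soon as it is fully scaled down. A sketch of a Deployment with the same shape follows (Go, k8s.io/api types; only the fields relevant here are set).

    package main

    import (
    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    // cleanupDeployment mirrors "test-cleanup-deployment" dumped above.
    // RevisionHistoryLimit of 0 tells the controller to delete every old
    // ReplicaSet as soon as the rollout that replaced it completes.
    func cleanupDeployment() *appsv1.Deployment {
    	labels := map[string]string{"name": "cleanup-pod"}
    	return &appsv1.Deployment{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
    		Spec: appsv1.DeploymentSpec{
    			Replicas:             int32Ptr(1),
    			RevisionHistoryLimit: int32Ptr(0),
    			Selector:             &metav1.LabelSelector{MatchLabels: labels},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "agnhost",
    						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
    					}},
    				},
    			},
    		},
    	}
    }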
• [SLOW TEST:11.822 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":20,"skipped":486,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:26:43.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:26:49.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7082" for this suite. 
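The watch-ordering spec that just ran relies on the fact that a watch can be started from any historical resourceVersion and that all watchers replay the same event stream in the same order. A hedged client-go sketch of opening such a watch (namespace and resourceVersion are placeholders; the test itself watches the ConfigMaps produced by its background goroutine):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // watchFrom opens a watch over ConfigMaps in ns starting at an older
    // resourceVersion rv. Any two watches opened with the same rv must
    // deliver the same events in the same order, which is the invariant
    // the test verifies across its concurrent watchers.
    func watchFrom(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
    	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		// Each event carries the object's new resourceVersion; comparing
    		// these sequences across watchers checks the ordering guarantee.
    		fmt.Println(ev.Type)
    	}
    	return nil
    }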
• [SLOW TEST:5.173 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":21,"skipped":498,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:26:49.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e24774a3-89f5-4b40-bed7-e125c98b512b STEP: Creating a pod to test consume configMaps Nov 18 06:26:49.446: INFO: Waiting up to 5m0s for pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6" in namespace "configmap-9910" to be "Succeeded or Failed" Nov 18 06:26:49.515: INFO: Pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6": Phase="Pending", Reason="", readiness=false. Elapsed: 68.607476ms Nov 18 06:26:51.521: INFO: Pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074832262s Nov 18 06:26:53.528: INFO: Pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6": Phase="Running", Reason="", readiness=true. Elapsed: 4.081442278s Nov 18 06:26:55.534: INFO: Pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087871477s STEP: Saw pod success Nov 18 06:26:55.534: INFO: Pod "pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6" satisfied condition "Succeeded or Failed" Nov 18 06:26:55.540: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6 container configmap-volume-test: STEP: delete the pod Nov 18 06:26:55.627: INFO: Waiting for pod pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6 to disappear Nov 18 06:26:55.631: INFO: Pod pod-configmaps-94cdebfc-1be6-4fed-8ea6-45279cd23db6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:26:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9910" for this suite. 
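What the ConfigMap spec above sets up is a single ConfigMap mounted twice into one pod, with the container reading both mount points. A minimal sketch of that pod shape (Go, k8s.io/api/core/v1; the ConfigMap name, mount paths, and busybox command are illustrative rather than the test's exact fixture):

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // twoVolumePod mounts the same ConfigMap at two paths; the container
    // prints both copies of the data to prove each mount is populated.
    func twoVolumePod() *corev1.Pod {
    	cmVolume := func(volName string) corev1.Volume {
    		return corev1.Volume{
    			Name: volName,
    			VolumeSource: corev1.VolumeSource{
    				ConfigMap: &corev1.ConfigMapVolumeSource{
    					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
    				},
    			},
    		}
    	}
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes:       []corev1.Volume{cmVolume("cm-vol-1"), cmVolume("cm-vol-2")},
    			Containers: []corev1.Container{{
    				Name:    "configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /etc/cm-1/* /etc/cm-2/*"},
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "cm-vol-1", MountPath: "/etc/cm-1"},
    					{Name: "cm-vol-2", MountPath: "/etc/cm-2"},
    				},
    			}},
    		},
    	}
    }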
• [SLOW TEST:6.472 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:26:55.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-64e451f5-3934-4d1a-87fc-e1175a7cad6e STEP: Creating a pod to test consume configMaps Nov 18 06:26:55.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc" in namespace "projected-3282" to be "Succeeded or Failed" Nov 18 06:26:55.771: INFO: Pod "pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.969918ms Nov 18 06:26:57.779: INFO: Pod "pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030608223s Nov 18 06:26:59.786: INFO: Pod "pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038160171s STEP: Saw pod success Nov 18 06:26:59.787: INFO: Pod "pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc" satisfied condition "Succeeded or Failed" Nov 18 06:26:59.792: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc container projected-configmap-volume-test: STEP: delete the pod Nov 18 06:26:59.846: INFO: Waiting for pod pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc to disappear Nov 18 06:26:59.849: INFO: Pod pod-projected-configmaps-b9b52853-aa26-4dee-b440-861686fb10cc no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:26:59.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3282" for this suite. 
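The projected variant that just passed differs only in the volume source: the ConfigMap is wrapped in a projected volume, which can merge several sources (ConfigMaps, Secrets, downward API, service-account tokens) under one mount point. A sketch of the volume stanza, with hypothetical ConfigMap and key names:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // projectedConfigMapVolume delivers ConfigMap data through a projected
    // source, which could be combined in Sources with Secret or
    // downward-API projections sharing the same mount.
    func projectedConfigMapVolume() corev1.Volume {
    	return corev1.Volume{
    		Name: "projected-configmap-volume",
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
    						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
    					},
    				}},
    			},
    		},
    	}
    }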
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":534,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:26:59.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Nov 18 06:26:59.981: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Nov 18 06:26:59.998: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 18 06:27:00.000: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Nov 18 06:27:00.063: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Nov 18 06:27:00.064: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Nov 18 06:27:00.105: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Nov 18 06:27:00.106: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Nov 18 06:27:07.445: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:07.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1933" for this suite. • [SLOW TEST:7.656 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":24,"skipped":550,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:07.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:24.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4026" for this suite. • [SLOW TEST:17.202 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":25,"skipped":567,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:24.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 18 06:27:24.819: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 18 06:27:24.839: INFO: Waiting for terminating namespaces to be deleted... 
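For reference, the scoped quota exercised by the ResourceQuota spec above can be expressed as follows: a BestEffort-scoped quota tracks only pods that set no requests and no limits, while a twin quota scoped NotBestEffort tracks only pods that do, which is how the test tells the two usages apart. This is a Go sketch with hypothetical names and a hard limit chosen arbitrarily, not the test's fixture.

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // bestEffortQuota counts only BestEffort pods; swapping the scope for
    // corev1.ResourceQuotaScopeNotBestEffort gives the complementary quota.
    func bestEffortQuota() *corev1.ResourceQuota {
    	return &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
    		Spec: corev1.ResourceQuotaSpec{
    			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
    			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
    		},
    	}
    }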
Nov 18 06:27:24.848: INFO: Logging pods the apiserver thinks are on node leguer-worker before test Nov 18 06:27:24.857: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:27:24.857: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:27:24.857: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:27:24.857: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 06:27:24.857: INFO: pod-no-resources from limitrange-1933 started at 2020-11-18 06:27:00 +0000 UTC (1 container status recorded) Nov 18 06:27:24.857: INFO: Container pause ready: false, restart count 0 Nov 18 06:27:24.857: INFO: Logging pods the apiserver thinks are on node leguer-worker2 before test Nov 18 06:27:24.867: INFO: rally-1fd7609f-vznw6im3 from c-rally-1fd7609f-zyxivco9 started at 2020-11-18 06:26:50 +0000 UTC (1 container status recorded) Nov 18 06:27:24.867: INFO: Container rally-1fd7609f-vznw6im3 ready: true, restart count 0 Nov 18 06:27:24.867: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:27:24.867: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:27:24.867: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container status recorded) Nov 18 06:27:24.867: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 06:27:24.867: INFO: pfpod2 from limitrange-1933 started at 2020-11-18 06:27:07 +0000 UTC (1 container status recorded) Nov 18 06:27:24.868: INFO: Container pause ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Nov 18 06:27:24.970: INFO: Pod rally-1fd7609f-vznw6im3 requesting resource cpu=0m on Node leguer-worker2 Nov 18 06:27:24.970: INFO: Pod kindnet-lc95n requesting resource cpu=100m on Node leguer-worker Nov 18 06:27:24.970: INFO: Pod kindnet-nffr7 requesting resource cpu=100m on Node leguer-worker2 Nov 18 06:27:24.970: INFO: Pod kube-proxy-bmzvg requesting resource cpu=0m on Node leguer-worker Nov 18 06:27:24.970: INFO: Pod kube-proxy-sxhc5 requesting resource cpu=0m on Node leguer-worker2 Nov 18 06:27:24.970: INFO: Pod pfpod2 requesting resource cpu=600m on Node leguer-worker2 Nov 18 06:27:24.971: INFO: Pod pod-no-resources requesting resource cpu=100m on Node leguer-worker STEP: Starting Pods to consume most of the cluster CPU. Nov 18 06:27:24.971: INFO: Creating a pod which consumes cpu=11060m on Node leguer-worker Nov 18 06:27:24.985: INFO: Creating a pod which consumes cpu=10710m on Node leguer-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-a3528126-0250-4dc8-a953-585877cf5032.16488638a42b2b4f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5069/filler-pod-a3528126-0250-4dc8-a953-585877cf5032 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8.16488638a78f9a9f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5069/filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8.16488639a38d9ad6], Reason = [Started], Message = [Started container filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8] STEP: Considering event: Type = [Normal], Name = [filler-pod-a3528126-0250-4dc8-a953-585877cf5032.16488638f681ec70], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a3528126-0250-4dc8-a953-585877cf5032.164886398ee4c616], Reason = [Started], Message = [Started container filler-pod-a3528126-0250-4dc8-a953-585877cf5032] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8.1648863993769816], Reason = [Created], Message = [Created container filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8] STEP: Considering event: Type = [Normal], Name = [filler-pod-a3528126-0250-4dc8-a953-585877cf5032.164886397a05687f], Reason = [Created], Message = [Created container filler-pod-a3528126-0250-4dc8-a953-585877cf5032] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d40088b-91b3-4bbd-af27-0dff75c61cd8.164886393e400a96], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Warning], Name = [additional-pod.1648863a15c68fc7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1648863a1c560b96], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:32.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5069" for this suite. 
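The arithmetic behind the filler pods is visible in the log: the test sums the CPU already requested on each node (for example 100m for kindnet-nffr7 plus 600m for pfpod2 on leguer-worker2), subtracts that from the node's allocatable CPU, and creates pause pods that request the remainder, so that one more pod with any non-zero CPU request must fail with "Insufficient cpu". A sketch of such an unschedulable pod (Go; the 1-CPU request is an arbitrary non-zero value, not the test's exact figure):

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // additionalPod requests more CPU than either node has left once the
    // filler pods are running, so it stays Pending with FailedScheduling.
    func additionalPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "pause",
    				Image: "k8s.gcr.io/pause:3.2",
    				Resources: corev1.ResourceRequirements{
    					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1")},
    				},
    			}},
    		},
    	}
    }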
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.582 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":26,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:32.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 18 06:27:32.386: INFO: Waiting up to 5m0s for pod "pod-3db90148-4966-477a-9920-679b77605537" in namespace "emptydir-3623" to be "Succeeded or Failed" Nov 18 06:27:32.390: INFO: Pod "pod-3db90148-4966-477a-9920-679b77605537": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283873ms Nov 18 06:27:34.774: INFO: Pod "pod-3db90148-4966-477a-9920-679b77605537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.387688819s Nov 18 06:27:36.779: INFO: Pod "pod-3db90148-4966-477a-9920-679b77605537": Phase="Running", Reason="", readiness=true. Elapsed: 4.392912362s Nov 18 06:27:38.790: INFO: Pod "pod-3db90148-4966-477a-9920-679b77605537": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.404219158s STEP: Saw pod success Nov 18 06:27:38.791: INFO: Pod "pod-3db90148-4966-477a-9920-679b77605537" satisfied condition "Succeeded or Failed" Nov 18 06:27:38.795: INFO: Trying to get logs from node leguer-worker2 pod pod-3db90148-4966-477a-9920-679b77605537 container test-container: STEP: delete the pod Nov 18 06:27:38.856: INFO: Waiting for pod pod-3db90148-4966-477a-9920-679b77605537 to disappear Nov 18 06:27:38.864: INFO: Pod pod-3db90148-4966-477a-9920-679b77605537 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:38.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3623" for this suite. 
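The (root,0644,default) case above means: write as root, expect file mode 0644, on the default emptyDir medium (node disk; setting the medium to "Memory" would use tmpfs instead). A sketch of an equivalent pod, using busybox in place of the e2e mounttest image:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod writes a file into an emptyDir volume and prints its
    // permission bits, which should come back as 644.
    func emptyDirPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-default"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name:         "test-volume",
    				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "test-container",
    				Image:        "busybox",
    				Command:      []string{"sh", "-c", "echo hi > /test/f && chmod 0644 /test/f && stat -c '%a' /test/f"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
    			}},
    		},
    	}
    }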
• [SLOW TEST:6.564 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":630,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:38.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:27:38.976: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e8c15e84-7178-4d02-8cae-ff751479fdf6" in namespace "security-context-test-8220" to be "Succeeded or Failed" Nov 18 06:27:38.984: INFO: Pod "busybox-user-65534-e8c15e84-7178-4d02-8cae-ff751479fdf6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.326435ms Nov 18 06:27:40.991: INFO: Pod "busybox-user-65534-e8c15e84-7178-4d02-8cae-ff751479fdf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014788358s Nov 18 06:27:42.999: INFO: Pod "busybox-user-65534-e8c15e84-7178-4d02-8cae-ff751479fdf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02261457s Nov 18 06:27:42.999: INFO: Pod "busybox-user-65534-e8c15e84-7178-4d02-8cae-ff751479fdf6" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:43.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8220" for this suite. 
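The uid the test asserts, 65534, is the conventional "nobody" user; it is applied through the pod-level security context, which every container inherits unless it overrides it. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 65534   # all containers in the pod run as this uid unless overridden per container
  containers:
  - name: main
    image: busybox
    command: ["id", "-u"]
EOF
kubectl logs runasuser-demo   # expect: 65534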
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":28,"skipped":631,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:43.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3550 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3550 STEP: Deleting pre-stop pod Nov 18 06:27:58.622: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:27:58.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3550" for this suite. 
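The "prestop": 1 counter in the JSON above is the server pod recording exactly one hit from the tester's preStop hook, which the kubelet runs before sending SIGTERM when the pod is deleted. The hook is an ordinary lifecycle stanza; a sketch, where the peer URL (host server, port 8080, path /prestop) is an assumption modeled on the exchange logged above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -qO- http://server:8080/prestop"]   # hypothetical peer endpoint
EOF
kubectl delete pod prestop-demo   # kubelet runs the preStop exec, then terminates the container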
• [SLOW TEST:15.718 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":29,"skipped":635,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:27:58.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Nov 18 06:27:58.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f -' Nov 18 06:28:02.849: INFO: stderr: "" Nov 18 06:28:02.849: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Nov 18 06:28:02.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config diff -f -' Nov 18 06:28:06.888: INFO: rc: 1 Nov 18 06:28:06.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete -f -' Nov 18 06:28:08.310: INFO: stderr: "" Nov 18 06:28:08.311: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:28:08.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1187" for this suite. 
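The bare "rc: 1" above is the signal this test is looking for: kubectl diff exits 0 when live and declared objects match, 1 when it finds a difference, and greater than 1 on error. So after creating the deployment with one image and diffing a manifest that declares another, exit code 1 is the passing case. Reproduced by hand (the manifest file name is illustrative):

kubectl create deployment httpd-deployment --image=httpd
# diff against a manifest that declares a different image for the same deployment
kubectl diff -f httpd-deployment-new-image.yaml; echo "rc=$?"   # rc=1 means a difference was found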
• [SLOW TEST:9.592 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":30,"skipped":648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:28:08.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Nov 18 06:28:08.426: INFO: namespace kubectl-8197 Nov 18 06:28:08.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8197' Nov 18 06:28:11.337: INFO: stderr: "" Nov 18 06:28:11.337: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 18 06:28:12.365: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:12.366: INFO: Found 0 / 1 Nov 18 06:28:13.383: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:13.383: INFO: Found 0 / 1 Nov 18 06:28:14.527: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:14.527: INFO: Found 0 / 1 Nov 18 06:28:15.346: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:15.346: INFO: Found 0 / 1 Nov 18 06:28:16.345: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:16.346: INFO: Found 1 / 1 Nov 18 06:28:16.346: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 18 06:28:16.388: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 06:28:16.388: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 18 06:28:16.388: INFO: wait on agnhost-primary startup in kubectl-8197 Nov 18 06:28:16.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs agnhost-primary-phvjx agnhost-primary --namespace=kubectl-8197' Nov 18 06:28:17.799: INFO: stderr: "" Nov 18 06:28:17.799: INFO: stdout: "Paused\n" STEP: exposing RC Nov 18 06:28:17.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8197' Nov 18 06:28:19.402: INFO: stderr: "" Nov 18 06:28:19.402: INFO: stdout: "service/rm2 exposed\n" Nov 18 06:28:19.701: INFO: Service rm2 in namespace kubectl-8197 found. STEP: exposing service Nov 18 06:28:21.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8197' Nov 18 06:28:23.201: INFO: stderr: "" Nov 18 06:28:23.201: INFO: stdout: "service/rm3 exposed\n" Nov 18 06:28:23.226: INFO: Service rm3 in namespace kubectl-8197 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:28:25.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8197" for this suite. • [SLOW TEST:16.932 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":31,"skipped":673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:28:25.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:28:25.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4343" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":32,"skipped":709,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:28:25.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-c1d3f485-bd6b-4ddf-a9cb-55cbfae82de5 STEP: Creating a pod to test consume configMaps Nov 18 06:28:25.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671" in namespace "configmap-718" to be "Succeeded or Failed" Nov 18 06:28:25.873: INFO: Pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671": Phase="Pending", Reason="", readiness=false. Elapsed: 25.343705ms Nov 18 06:28:27.881: INFO: Pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033587031s Nov 18 06:28:29.917: INFO: Pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06933106s Nov 18 06:28:32.126: INFO: Pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.278445698s STEP: Saw pod success Nov 18 06:28:32.126: INFO: Pod "pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671" satisfied condition "Succeeded or Failed" Nov 18 06:28:32.154: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671 container configmap-volume-test: STEP: delete the pod Nov 18 06:28:32.207: INFO: Waiting for pod pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671 to disappear Nov 18 06:28:32.220: INFO: Pod pod-configmaps-f1d80811-8638-4ae4-b8d7-f48fdede7671 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:28:32.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-718" for this suite. 
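defaultMode here is the file mode applied to every key projected from the ConfigMap into the volume. A minimal sketch, with illustrative names (stat -L dereferences the projected symlink to read the mode of the underlying file):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/cfg/data-1 && cat /etc/cfg/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: demo-cm
      defaultMode: 0400   # every file projected from the ConfigMap gets this mode
EOF
kubectl logs configmap-mode-demo   # expect: 400, then value-1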
• [SLOW TEST:6.513 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:28:32.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9359 Nov 18 06:28:36.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Nov 18 06:28:38.167: INFO: stderr: "I1118 06:28:37.924246 585 log.go:181] (0x4000836000) (0x4000b360a0) Create stream\nI1118 06:28:37.931001 585 log.go:181] (0x4000836000) (0x4000b360a0) Stream added, broadcasting: 1\nI1118 06:28:37.957813 585 log.go:181] (0x4000836000) Reply frame received for 1\nI1118 06:28:37.962092 585 log.go:181] (0x4000836000) (0x40001c7ea0) Create stream\nI1118 06:28:37.962404 585 log.go:181] (0x4000836000) (0x40001c7ea0) Stream added, broadcasting: 3\nI1118 06:28:37.965972 585 log.go:181] (0x4000836000) Reply frame received for 3\nI1118 06:28:37.966470 585 log.go:181] (0x4000836000) (0x40002a2fa0) Create stream\nI1118 06:28:37.966576 585 log.go:181] (0x4000836000) (0x40002a2fa0) Stream added, broadcasting: 5\nI1118 06:28:37.968012 585 log.go:181] (0x4000836000) Reply frame received for 5\nI1118 06:28:38.036599 585 log.go:181] (0x4000836000) Data frame received for 5\nI1118 06:28:38.037053 585 log.go:181] (0x40002a2fa0) (5) Data frame handling\nI1118 06:28:38.037800 585 log.go:181] (0x40002a2fa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1118 06:28:38.144233 585 log.go:181] (0x4000836000) Data frame received for 5\nI1118 06:28:38.144430 585 log.go:181] (0x40002a2fa0) (5) Data frame handling\nI1118 06:28:38.144680 585 log.go:181] (0x4000836000) Data 
frame received for 3\nI1118 06:28:38.145000 585 log.go:181] (0x40001c7ea0) (3) Data frame handling\nI1118 06:28:38.145200 585 log.go:181] (0x40001c7ea0) (3) Data frame sent\nI1118 06:28:38.145397 585 log.go:181] (0x4000836000) Data frame received for 3\nI1118 06:28:38.145565 585 log.go:181] (0x40001c7ea0) (3) Data frame handling\nI1118 06:28:38.148065 585 log.go:181] (0x4000836000) Data frame received for 1\nI1118 06:28:38.148168 585 log.go:181] (0x4000b360a0) (1) Data frame handling\nI1118 06:28:38.148266 585 log.go:181] (0x4000b360a0) (1) Data frame sent\nI1118 06:28:38.149684 585 log.go:181] (0x4000836000) (0x4000b360a0) Stream removed, broadcasting: 1\nI1118 06:28:38.152350 585 log.go:181] (0x4000836000) Go away received\nI1118 06:28:38.153912 585 log.go:181] (0x4000836000) (0x4000b360a0) Stream removed, broadcasting: 1\nI1118 06:28:38.154249 585 log.go:181] (0x4000836000) (0x40001c7ea0) Stream removed, broadcasting: 3\nI1118 06:28:38.154637 585 log.go:181] (0x4000836000) (0x40002a2fa0) Stream removed, broadcasting: 5\n" Nov 18 06:28:38.168: INFO: stdout: "iptables" Nov 18 06:28:38.168: INFO: proxyMode: iptables Nov 18 06:28:38.176: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:38.402: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:40.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:40.408: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:42.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:42.411: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:44.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:44.410: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:46.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:46.410: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:48.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:48.410: INFO: Pod kube-proxy-mode-detector still exists Nov 18 06:28:50.402: INFO: Waiting for pod kube-proxy-mode-detector to disappear Nov 18 06:28:50.410: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9359 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9359 I1118 06:28:50.459231 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9359, replica count: 3 I1118 06:28:53.510867 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:28:56.511630 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:28:59.512527 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 06:28:59.524: INFO: Creating new exec pod Nov 18 06:29:06.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 execpod-affinityj57k8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Nov 18 06:29:08.281: INFO: stderr: "I1118 06:29:08.113440 605 log.go:181] (0x4000642210) (0x4000ad0000) Create stream\nI1118 06:29:08.117851 605 log.go:181] (0x4000642210) (0x4000ad0000) Stream 
added, broadcasting: 1\nI1118 06:29:08.180101 605 log.go:181] (0x4000642210) Reply frame received for 1\nI1118 06:29:08.180918 605 log.go:181] (0x4000642210) (0x4000ad00a0) Create stream\nI1118 06:29:08.180988 605 log.go:181] (0x4000642210) (0x4000ad00a0) Stream added, broadcasting: 3\nI1118 06:29:08.182190 605 log.go:181] (0x4000642210) Reply frame received for 3\nI1118 06:29:08.182450 605 log.go:181] (0x4000642210) (0x40009b4000) Create stream\nI1118 06:29:08.182510 605 log.go:181] (0x4000642210) (0x40009b4000) Stream added, broadcasting: 5\nI1118 06:29:08.183857 605 log.go:181] (0x4000642210) Reply frame received for 5\nI1118 06:29:08.250932 605 log.go:181] (0x4000642210) Data frame received for 5\nI1118 06:29:08.251102 605 log.go:181] (0x40009b4000) (5) Data frame handling\nI1118 06:29:08.251460 605 log.go:181] (0x40009b4000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI1118 06:29:08.263658 605 log.go:181] (0x4000642210) Data frame received for 5\nI1118 06:29:08.263743 605 log.go:181] (0x40009b4000) (5) Data frame handling\nI1118 06:29:08.263839 605 log.go:181] (0x40009b4000) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1118 06:29:08.263933 605 log.go:181] (0x4000642210) Data frame received for 5\nI1118 06:29:08.264046 605 log.go:181] (0x4000642210) Data frame received for 3\nI1118 06:29:08.264185 605 log.go:181] (0x4000ad00a0) (3) Data frame handling\nI1118 06:29:08.264286 605 log.go:181] (0x40009b4000) (5) Data frame handling\nI1118 06:29:08.265036 605 log.go:181] (0x4000642210) Data frame received for 1\nI1118 06:29:08.265110 605 log.go:181] (0x4000ad0000) (1) Data frame handling\nI1118 06:29:08.265186 605 log.go:181] (0x4000ad0000) (1) Data frame sent\nI1118 06:29:08.266682 605 log.go:181] (0x4000642210) (0x4000ad0000) Stream removed, broadcasting: 1\nI1118 06:29:08.268637 605 log.go:181] (0x4000642210) Go away received\nI1118 06:29:08.271902 605 log.go:181] (0x4000642210) (0x4000ad0000) Stream removed, broadcasting: 1\nI1118 06:29:08.272230 605 log.go:181] (0x4000642210) (0x4000ad00a0) Stream removed, broadcasting: 3\nI1118 06:29:08.272451 605 log.go:181] (0x4000642210) (0x40009b4000) Stream removed, broadcasting: 5\n" Nov 18 06:29:08.282: INFO: stdout: "" Nov 18 06:29:08.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 execpod-affinityj57k8 -- /bin/sh -x -c nc -zv -t -w 2 10.103.235.182 80' Nov 18 06:29:10.001: INFO: stderr: "I1118 06:29:09.847588 625 log.go:181] (0x40007f8000) (0x4000426280) Create stream\nI1118 06:29:09.853548 625 log.go:181] (0x40007f8000) (0x4000426280) Stream added, broadcasting: 1\nI1118 06:29:09.866175 625 log.go:181] (0x40007f8000) Reply frame received for 1\nI1118 06:29:09.866971 625 log.go:181] (0x40007f8000) (0x400030a6e0) Create stream\nI1118 06:29:09.867048 625 log.go:181] (0x40007f8000) (0x400030a6e0) Stream added, broadcasting: 3\nI1118 06:29:09.868527 625 log.go:181] (0x40007f8000) Reply frame received for 3\nI1118 06:29:09.868797 625 log.go:181] (0x40007f8000) (0x4000716140) Create stream\nI1118 06:29:09.868934 625 log.go:181] (0x40007f8000) (0x4000716140) Stream added, broadcasting: 5\nI1118 06:29:09.869968 625 log.go:181] (0x40007f8000) Reply frame received for 5\nI1118 06:29:09.945241 625 log.go:181] (0x40007f8000) Data frame received for 3\nI1118 06:29:09.945511 625 log.go:181] (0x400030a6e0) (3) Data frame handling\nI1118 06:29:09.946430 625 log.go:181] (0x40007f8000) Data 
frame received for 5\nI1118 06:29:09.946538 625 log.go:181] (0x4000716140) (5) Data frame handling\nI1118 06:29:09.947011 625 log.go:181] (0x4000716140) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.235.182 80\nConnection to 10.103.235.182 80 port [tcp/http] succeeded!\nI1118 06:29:09.947924 625 log.go:181] (0x40007f8000) Data frame received for 5\nI1118 06:29:09.948009 625 log.go:181] (0x4000716140) (5) Data frame handling\nI1118 06:29:09.949981 625 log.go:181] (0x40007f8000) Data frame received for 1\nI1118 06:29:09.950062 625 log.go:181] (0x4000426280) (1) Data frame handling\nI1118 06:29:09.950154 625 log.go:181] (0x4000426280) (1) Data frame sent\nI1118 06:29:09.954360 625 log.go:181] (0x40007f8000) (0x4000426280) Stream removed, broadcasting: 1\nI1118 06:29:09.992144 625 log.go:181] (0x40007f8000) (0x4000426280) Stream removed, broadcasting: 1\nI1118 06:29:09.992501 625 log.go:181] (0x40007f8000) (0x400030a6e0) Stream removed, broadcasting: 3\nI1118 06:29:09.992785 625 log.go:181] (0x40007f8000) (0x4000716140) Stream removed, broadcasting: 5\n" Nov 18 06:29:10.002: INFO: stdout: "" Nov 18 06:29:10.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 execpod-affinityj57k8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.235.182:80/ ; done' Nov 18 06:29:11.721: INFO: stderr: "I1118 06:29:11.484261 645 log.go:181] (0x40005ca000) (0x4000bd40a0) Create stream\nI1118 06:29:11.487211 645 log.go:181] (0x40005ca000) (0x4000bd40a0) Stream added, broadcasting: 1\nI1118 06:29:11.494864 645 log.go:181] (0x40005ca000) Reply frame received for 1\nI1118 06:29:11.495389 645 log.go:181] (0x40005ca000) (0x4000e10320) Create stream\nI1118 06:29:11.495441 645 log.go:181] (0x40005ca000) (0x4000e10320) Stream added, broadcasting: 3\nI1118 06:29:11.497031 645 log.go:181] (0x40005ca000) Reply frame received for 3\nI1118 06:29:11.497428 645 log.go:181] (0x40005ca000) (0x4000e103c0) Create stream\nI1118 06:29:11.497512 645 log.go:181] (0x40005ca000) (0x4000e103c0) Stream added, broadcasting: 5\nI1118 06:29:11.499201 645 log.go:181] (0x40005ca000) Reply frame received for 5\nI1118 06:29:11.596450 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.596804 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.597635 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ seq 0 15\nI1118 06:29:11.626163 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.626764 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.627334 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.632521 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.632655 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.632742 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.632915 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.633044 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.633131 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.634127 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.634266 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.634359 645 
log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.634490 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.634584 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.634685 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.634788 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.634897 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.635033 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.635137 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.635232 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.635358 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.635476 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.635540 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.635641 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.635712 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.635801 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.635879 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.635946 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.636007 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.636107 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.636196 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.636265 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.636364 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.638659 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.638774 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.638879 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.641290 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.641450 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.641597 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.641730 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.641865 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.642053 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.645447 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.645524 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.645589 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.645685 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.645806 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.645919 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.646013 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.646086 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.646156 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.649469 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.649557 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.649641 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.649894 645 log.go:181] (0x40005ca000) 
Data frame received for 5\nI1118 06:29:11.649943 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.650001 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.650076 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.650120 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.650167 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.654102 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.654182 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.654263 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.654607 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.654679 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.654731 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.654795 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.654847 645 log.go:181] (0x4000e10320) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.654902 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.659240 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.659366 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.659440 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.659543 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.659628 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.659710 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.659782 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.659858 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.659943 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.665100 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.665178 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.665247 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.665459 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.665570 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.665656 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.665743 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.665850 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.665960 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -sI1118 06:29:11.666348 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.666418 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.666522 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.671769 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.671857 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.671948 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.672025 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.672095 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.672157 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.672210 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 
06:29:11.672275 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.672347 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.677691 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.677754 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.677830 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.678273 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.678360 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.678476 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.678578 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.678652 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.678717 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.682231 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.682323 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.682429 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.682639 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.682716 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.682780 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.682848 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.682898 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.682957 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.687672 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.687746 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.687843 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.688184 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.688297 645 log.go:181] (0x4000e103c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/I1118 06:29:11.688404 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.688498 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.688581 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.688672 645 log.go:181] (0x4000e103c0) (5) Data frame sent\nI1118 06:29:11.688746 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.688824 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.689026 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n\nI1118 06:29:11.692436 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.692504 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.692576 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.693349 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.693441 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.693550 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.693658 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.693734 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.693821 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.698882 645 log.go:181] (0x40005ca000) 
Data frame received for 5\nI1118 06:29:11.698998 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.699107 645 log.go:181] (0x4000e103c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:11.699275 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.699380 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.699483 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.699580 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.699659 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.700637 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.704228 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.704369 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.704518 645 log.go:181] (0x4000e10320) (3) Data frame sent\nI1118 06:29:11.704790 645 log.go:181] (0x40005ca000) Data frame received for 3\nI1118 06:29:11.704975 645 log.go:181] (0x4000e10320) (3) Data frame handling\nI1118 06:29:11.705121 645 log.go:181] (0x40005ca000) Data frame received for 5\nI1118 06:29:11.705203 645 log.go:181] (0x4000e103c0) (5) Data frame handling\nI1118 06:29:11.706451 645 log.go:181] (0x40005ca000) Data frame received for 1\nI1118 06:29:11.706520 645 log.go:181] (0x4000bd40a0) (1) Data frame handling\nI1118 06:29:11.706595 645 log.go:181] (0x4000bd40a0) (1) Data frame sent\nI1118 06:29:11.707515 645 log.go:181] (0x40005ca000) (0x4000bd40a0) Stream removed, broadcasting: 1\nI1118 06:29:11.711171 645 log.go:181] (0x40005ca000) (0x4000bd40a0) Stream removed, broadcasting: 1\nI1118 06:29:11.711547 645 log.go:181] (0x40005ca000) (0x4000e10320) Stream removed, broadcasting: 3\nI1118 06:29:11.713128 645 log.go:181] (0x40005ca000) (0x4000e103c0) Stream removed, broadcasting: 5\n" Nov 18 06:29:11.727: INFO: stdout: "\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7\naffinity-clusterip-timeout-sj8l7" Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 
06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.727: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.728: INFO: Received response from host: affinity-clusterip-timeout-sj8l7 Nov 18 06:29:11.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 execpod-affinityj57k8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.103.235.182:80/' Nov 18 06:29:13.303: INFO: stderr: "I1118 06:29:13.172717 665 log.go:181] (0x4000b460b0) (0x4000612140) Create stream\nI1118 06:29:13.178456 665 log.go:181] (0x4000b460b0) (0x4000612140) Stream added, broadcasting: 1\nI1118 06:29:13.189216 665 log.go:181] (0x4000b460b0) Reply frame received for 1\nI1118 06:29:13.190064 665 log.go:181] (0x4000b460b0) (0x4000378320) Create stream\nI1118 06:29:13.190130 665 log.go:181] (0x4000b460b0) (0x4000378320) Stream added, broadcasting: 3\nI1118 06:29:13.191478 665 log.go:181] (0x4000b460b0) Reply frame received for 3\nI1118 06:29:13.191715 665 log.go:181] (0x4000b460b0) (0x40003783c0) Create stream\nI1118 06:29:13.191796 665 log.go:181] (0x4000b460b0) (0x40003783c0) Stream added, broadcasting: 5\nI1118 06:29:13.193143 665 log.go:181] (0x4000b460b0) Reply frame received for 5\nI1118 06:29:13.278057 665 log.go:181] (0x4000b460b0) Data frame received for 5\nI1118 06:29:13.278401 665 log.go:181] (0x40003783c0) (5) Data frame handling\nI1118 06:29:13.279428 665 log.go:181] (0x40003783c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:13.283438 665 log.go:181] (0x4000b460b0) Data frame received for 3\nI1118 06:29:13.283629 665 log.go:181] (0x4000378320) (3) Data frame handling\nI1118 06:29:13.283815 665 log.go:181] (0x4000378320) (3) Data frame sent\nI1118 06:29:13.284103 665 log.go:181] (0x4000b460b0) Data frame received for 5\nI1118 06:29:13.284241 665 log.go:181] (0x4000b460b0) Data frame received for 3\nI1118 06:29:13.284398 665 log.go:181] (0x4000378320) (3) Data frame handling\nI1118 06:29:13.284555 665 log.go:181] (0x40003783c0) (5) Data frame handling\nI1118 06:29:13.286022 665 log.go:181] (0x4000b460b0) Data frame received for 1\nI1118 06:29:13.286137 665 log.go:181] (0x4000612140) (1) Data frame handling\nI1118 06:29:13.286296 665 log.go:181] (0x4000612140) (1) Data frame sent\nI1118 06:29:13.287356 665 log.go:181] (0x4000b460b0) (0x4000612140) Stream removed, broadcasting: 1\nI1118 06:29:13.289980 665 log.go:181] (0x4000b460b0) Go away received\nI1118 06:29:13.293805 665 log.go:181] (0x4000b460b0) (0x4000612140) Stream removed, broadcasting: 1\nI1118 06:29:13.294092 665 log.go:181] (0x4000b460b0) (0x4000378320) Stream removed, broadcasting: 3\nI1118 06:29:13.294287 665 log.go:181] (0x4000b460b0) (0x40003783c0) Stream removed, broadcasting: 5\n" Nov 18 06:29:13.304: INFO: stdout: "affinity-clusterip-timeout-sj8l7" Nov 18 06:29:28.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9359 execpod-affinityj57k8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.103.235.182:80/' Nov 18 06:29:30.099: INFO: stderr: "I1118 06:29:29.972030 685 log.go:181] (0x400003a0b0) (0x4000899040) Create stream\nI1118 06:29:29.977225 685 log.go:181] (0x400003a0b0) (0x4000899040) Stream added, broadcasting: 1\nI1118 06:29:29.991096 685 
log.go:181] (0x400003a0b0) Reply frame received for 1\nI1118 06:29:29.991586 685 log.go:181] (0x400003a0b0) (0x400071c000) Create stream\nI1118 06:29:29.991636 685 log.go:181] (0x400003a0b0) (0x400071c000) Stream added, broadcasting: 3\nI1118 06:29:29.993415 685 log.go:181] (0x400003a0b0) Reply frame received for 3\nI1118 06:29:29.993908 685 log.go:181] (0x400003a0b0) (0x400071c0a0) Create stream\nI1118 06:29:29.994025 685 log.go:181] (0x400003a0b0) (0x400071c0a0) Stream added, broadcasting: 5\nI1118 06:29:29.995569 685 log.go:181] (0x400003a0b0) Reply frame received for 5\nI1118 06:29:30.072192 685 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 06:29:30.072572 685 log.go:181] (0x400071c0a0) (5) Data frame handling\nI1118 06:29:30.073270 685 log.go:181] (0x400071c0a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.103.235.182:80/\nI1118 06:29:30.076735 685 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 06:29:30.076980 685 log.go:181] (0x400071c000) (3) Data frame handling\nI1118 06:29:30.077151 685 log.go:181] (0x400071c000) (3) Data frame sent\nI1118 06:29:30.077763 685 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 06:29:30.077912 685 log.go:181] (0x400071c000) (3) Data frame handling\nI1118 06:29:30.078389 685 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 06:29:30.078553 685 log.go:181] (0x400071c0a0) (5) Data frame handling\nI1118 06:29:30.079633 685 log.go:181] (0x400003a0b0) Data frame received for 1\nI1118 06:29:30.079712 685 log.go:181] (0x4000899040) (1) Data frame handling\nI1118 06:29:30.079836 685 log.go:181] (0x4000899040) (1) Data frame sent\nI1118 06:29:30.081223 685 log.go:181] (0x400003a0b0) (0x4000899040) Stream removed, broadcasting: 1\nI1118 06:29:30.083054 685 log.go:181] (0x400003a0b0) Go away received\nI1118 06:29:30.086693 685 log.go:181] (0x400003a0b0) (0x4000899040) Stream removed, broadcasting: 1\nI1118 06:29:30.086945 685 log.go:181] (0x400003a0b0) (0x400071c000) Stream removed, broadcasting: 3\nI1118 06:29:30.087106 685 log.go:181] (0x400003a0b0) (0x400071c0a0) Stream removed, broadcasting: 5\n" Nov 18 06:29:30.100: INFO: stdout: "affinity-clusterip-timeout-lvn9l" Nov 18 06:29:30.100: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9359, will wait for the garbage collector to delete the pods Nov 18 06:29:30.217: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 15.004943ms Nov 18 06:29:30.717: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.622948ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:29:40.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9359" for this suite. 
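What the long exchange above establishes: under the detected iptables proxy mode, sixteen back-to-back requests to the ClusterIP all land on the same endpoint (affinity-clusterip-timeout-sj8l7), and after the client stays idle past the affinity timeout, the next request lands on a different one (affinity-clusterip-timeout-lvn9l). Two Service fields drive this; a sketch assuming a 10-second timeout (consistent with the roughly 15 s pause logged between the last two probes), the agnhost serve-hostname port, and an illustrative selector label:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
spec:
  selector:
    name: affinity-clusterip-timeout   # assumption: backing pods carry this label
  ports:
  - port: 80
    targetPort: 9376                   # agnhost serve-hostname default port
  sessionAffinity: ClientIP            # pin each client IP to one endpoint
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10               # assumption: drop the pinning after 10 idle seconds
EOF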
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:68.195 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":34,"skipped":761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:29:40.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Nov 18 06:29:40.575: INFO: Pod name pod-release: Found 0 pods out of 1 Nov 18 06:29:45.600: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:29:46.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6591" for this suite. 
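"Released" means that once a pod's labels stop matching the controller's selector, the ReplicationController removes its ownerReference and stops counting it: the controller then sees 0 of 1 matching and creates a replacement, while the relabeled pod keeps running unowned. Reproducible by hand (label key/value and pod name are illustrative; substitute the real generated pod name):

kubectl get pods -l name=pod-release                 # the one pod managed by the RC
kubectl label pod pod-release-xxxxx name=not-matching --overwrite
kubectl get pods -l name=pod-release                 # RC has spawned a fresh replacement
kubectl get pod pod-release-xxxxx -o jsonpath='{.metadata.ownerReferences}'   # now empty: released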
• [SLOW TEST:6.534 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":35,"skipped":786,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:29:47.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 06:29:48.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6" in namespace "projected-2665" to be "Succeeded or Failed" Nov 18 06:29:48.214: INFO: Pod "downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6": Phase="Pending", Reason="", readiness=false. Elapsed: 105.634852ms Nov 18 06:29:50.234: INFO: Pod "downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126128273s Nov 18 06:29:52.308: INFO: Pod "downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200213285s STEP: Saw pod success Nov 18 06:29:52.308: INFO: Pod "downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6" satisfied condition "Succeeded or Failed" Nov 18 06:29:52.336: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6 container client-container: STEP: delete the pod Nov 18 06:29:52.379: INFO: Waiting for pod downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6 to disappear Nov 18 06:29:52.408: INFO: Pod downwardapi-volume-3ce169ab-3809-42f3-a9c9-a7bc09c242d6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:29:52.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2665" for this suite. 
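The volume under test projects the container's own memory request into a file via resourceFieldRef; with the default divisor the value is written in bytes. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs downward-mem-demo   # expect: 33554432 (32Mi in bytes)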
• [SLOW TEST:5.421 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":36,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:29:52.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1711.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.120.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.120.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.120.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.120.7_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1711.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1711.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1711.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1711.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.120.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.120.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.120.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.120.7_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:29:59.513: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.522: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.526: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.564: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.569: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:29:59.601: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 06:30:04.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods 
dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.644: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.648: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.678: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.685: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.689: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:04.734: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 06:30:09.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.624: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.649: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the 
server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.657: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:09.685: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 06:30:14.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.721: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.730: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.827: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.831: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.834: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.837: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod 
dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:14.856: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 06:30:19.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.646: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.650: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.654: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.658: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:19.681: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 
06:30:24.608: INFO: Unable to read wheezy_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.612: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.623: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.631: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.655: INFO: Unable to read jessie_udp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.658: INFO: Unable to read jessie_tcp@dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.662: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.665: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local from pod dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076: the server could not find the requested resource (get pods dns-test-ba452597-260d-4874-918c-4aa814eaf076) Nov 18 06:30:24.686: INFO: Lookups using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 failed for: [wheezy_udp@dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@dns-test-service.dns-1711.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_udp@dns-test-service.dns-1711.svc.cluster.local jessie_tcp@dns-test-service.dns-1711.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1711.svc.cluster.local] Nov 18 06:30:29.677: INFO: DNS probes using dns-1711/dns-test-ba452597-260d-4874-918c-4aa814eaf076 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:30:30.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1711" for this suite. 
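All of the wheezy/jessie probe scripts above reduce to three kinds of lookups against the cluster DNS. Run manually from any pod, with the names and ClusterIP taken from this log, they would be approximately:

    # A record of the test service:
    dig +search dns-test-service.dns-1711.svc.cluster.local A
    # SRV record for the named port "http" on that service:
    dig +search _http._tcp.dns-test-service.dns-1711.svc.cluster.local SRV
    # PTR (reverse) record for the service ClusterIP 10.98.120.7:
    dig +search 7.120.98.10.in-addr.arpa. PTR

The early "Unable to read" failures are expected: the probe pod retries once per second until the records resolve, and the run is only declared a success after a full pass, as in the final "DNS probes ... succeeded" line.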
• [SLOW TEST:38.261 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":37,"skipped":814,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:30:30.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 18 06:30:30.811: INFO: Waiting up to 5m0s for pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa" in namespace "emptydir-5892" to be "Succeeded or Failed" Nov 18 06:30:30.843: INFO: Pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa": Phase="Pending", Reason="", readiness=false. Elapsed: 31.779229ms Nov 18 06:30:32.875: INFO: Pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063769283s Nov 18 06:30:34.884: INFO: Pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa": Phase="Running", Reason="", readiness=true. Elapsed: 4.072645161s Nov 18 06:30:36.892: INFO: Pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08045986s STEP: Saw pod success Nov 18 06:30:36.892: INFO: Pod "pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa" satisfied condition "Succeeded or Failed" Nov 18 06:30:36.898: INFO: Trying to get logs from node leguer-worker pod pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa container test-container: STEP: delete the pod Nov 18 06:30:36.962: INFO: Waiting for pod pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa to disappear Nov 18 06:30:36.970: INFO: Pod pod-343f17e5-ab91-4217-9c0d-9195f46ea7fa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:30:36.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5892" for this suite. 
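What the emptyDir case above checks, in effect: a default-medium emptyDir is created world-writable (0777), so a non-root container can read and write it. A hand-rolled approximation, assuming a busybox image and an arbitrary non-root UID:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo           # hypothetical name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001             # any non-root UID
      containers:
      - name: test-container
        image: busybox              # illustrative image
        command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                # default medium (node disk)
    EOF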
• [SLOW TEST:6.307 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":38,"skipped":823,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:30:36.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
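The node-by-node bookkeeping that follows is driven by taints: the control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint and the DaemonSet pods have no matching toleration, so only the two worker nodes count toward "every node". The same state can be inspected by hand:

    # Which nodes carry taints (the control-plane shows the NoSchedule master taint):
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
    # Desired/ready counts for the DaemonSet and where its pods landed:
    kubectl get ds daemon-set -n daemonsets-8807
    kubectl get pods -n daemonsets-8807 -o wide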
Nov 18 06:30:37.163: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:37.170: INFO: Number of nodes with available pods: 0 Nov 18 06:30:37.171: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:38.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:38.197: INFO: Number of nodes with available pods: 0 Nov 18 06:30:38.197: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:39.317: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:39.505: INFO: Number of nodes with available pods: 0 Nov 18 06:30:39.505: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:40.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:40.189: INFO: Number of nodes with available pods: 0 Nov 18 06:30:40.189: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:41.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:41.188: INFO: Number of nodes with available pods: 1 Nov 18 06:30:41.189: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 06:30:42.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:42.190: INFO: Number of nodes with available pods: 2 Nov 18 06:30:42.191: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
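The e2e framework forces the failure by writing phase Failed directly into one daemon pod's status; the DaemonSet controller must then delete and replace it, which is the dip to "available pods: 1" and back to 2 seen below. A rough stand-in from the command line is to remove one daemon pod and watch the replacement appear (the label selector here is illustrative, not taken from the test source):

    # Delete the daemon pod on one worker and watch the controller revive it:
    kubectl delete pod -n daemonsets-8807 -l daemonset-name=daemon-set \
      --field-selector spec.nodeName=leguer-worker
    kubectl get pods -n daemonsets-8807 -o wide -w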
Nov 18 06:30:42.271: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:42.298: INFO: Number of nodes with available pods: 1 Nov 18 06:30:42.299: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:43.312: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:43.319: INFO: Number of nodes with available pods: 1 Nov 18 06:30:43.319: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:44.311: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:44.320: INFO: Number of nodes with available pods: 1 Nov 18 06:30:44.320: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:45.311: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:45.320: INFO: Number of nodes with available pods: 1 Nov 18 06:30:45.320: INFO: Node leguer-worker is running more than one daemon pod Nov 18 06:30:46.312: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 06:30:46.319: INFO: Number of nodes with available pods: 2 Nov 18 06:30:46.319: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8807, will wait for the garbage collector to delete the pods Nov 18 06:30:46.396: INFO: Deleting DaemonSet.extensions daemon-set took: 8.765002ms Nov 18 06:30:46.597: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.962311ms Nov 18 06:30:59.604: INFO: Number of nodes with available pods: 0 Nov 18 06:30:59.604: INFO: Number of running nodes: 0, number of available pods: 0 Nov 18 06:30:59.626: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8807/daemonsets","resourceVersion":"11984045"},"items":null} Nov 18 06:30:59.634: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8807/pods","resourceVersion":"11984045"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:30:59.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8807" for this suite. 
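Note the cleanup semantics above: the framework waits for the garbage collector to remove the dependent pods rather than orphaning them. The equivalent foreground cascade by hand (flag spelling shown for recent kubectl versions; older releases used a boolean --cascade):

    # Foreground cascade: the DaemonSet is removed only after its pods are gone.
    kubectl delete daemonset daemon-set -n daemonsets-8807 --cascade=foreground
    kubectl get ds,pods -n daemonsets-8807   # expect nothing left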
• [SLOW TEST:22.675 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":39,"skipped":823,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:30:59.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:31:03.137: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:31:05.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:31:07.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, 
loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741277863, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:31:10.248: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:31:10.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3040" for this suite. STEP: Destroying namespace "webhook-3040-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.944 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":40,"skipped":834,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:31:10.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 18 06:31:10.721: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 18 06:31:10.753: INFO: Waiting for terminating namespaces to be deleted... 
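Stepping back to the admission-webhook test that just passed: registering "the mutating pod webhook via the AdmissionRegistration API" means creating a MutatingWebhookConfiguration aimed at the e2e-test-webhook service. A trimmed sketch, with the configuration name, webhook name, and path assumed rather than taken from the test source:

    kubectl apply -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutate-pods-example     # hypothetical name
    webhooks:
    - name: mutate-pods.example.com # hypothetical webhook name
      clientConfig:
        service:
          name: e2e-test-webhook    # service name from this log
          namespace: webhook-3040
          path: /mutating-pods      # assumed path
        # caBundle omitted for brevity; a real registration needs the CA
        # that signed the webhook server's certificate.
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF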
Nov 18 06:31:10.759: INFO: Logging pods the apiserver thinks are on node leguer-worker before test Nov 18 06:31:10.770: INFO: rally-ddb9e3e8-4an86tq3 from c-rally-ddb9e3e8-8y9kxlf5 started at 2020-11-18 06:31:02 +0000 UTC (1 container status recorded) Nov 18 06:31:10.770: INFO: Container rally-ddb9e3e8-4an86tq3 ready: true, restart count 0 Nov 18 06:31:10.770: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:31:10.770: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:31:10.770: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:31:10.770: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 06:31:10.770: INFO: sample-webhook-deployment-cbccbf6bb-j6ggr from webhook-3040 started at 2020-11-18 06:31:03 +0000 UTC (1 container status recorded) Nov 18 06:31:10.770: INFO: Container sample-webhook ready: true, restart count 0 Nov 18 06:31:10.770: INFO: Logging pods the apiserver thinks are on node leguer-worker2 before test Nov 18 06:31:10.779: INFO: rally-ddb9e3e8-4an86tq3-5gpgk from c-rally-ddb9e3e8-8y9kxlf5 started at 2020-11-18 06:31:08 +0000 UTC (1 container status recorded) Nov 18 06:31:10.779: INFO: Container rally-ddb9e3e8-4an86tq3 ready: false, restart count 0 Nov 18 06:31:10.779: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 06:31:10.779: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:31:10.779: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container status recorded) Nov 18 06:31:10.779: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 06:31:10.779: INFO: webhook-to-be-mutated from webhook-3040 started at 2020-11-18 06:31:10 +0000 UTC (1 container status recorded) Nov 18 06:31:10.779: INFO: Container example ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5342a146-cf39-48d5-a732-5bc61209fc59 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-5342a146-cf39-48d5-a732-5bc61209fc59 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5342a146-cf39-48d5-a732-5bc61209fc59 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:31:31.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5028" for this suite.
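The predicate being validated: pods conflict on a node only when hostIP, protocol, and hostPort all collide. Mirroring pod1 and pod3 from the log (pod names, image, and container port are illustrative), both of these schedule onto the same worker without conflict:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-pod1           # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/hostname: leguer-worker   # pin both pods to one node
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # illustrative image
        ports:
        - containerPort: 8080
          hostPort: 54321
          hostIP: 127.0.0.1
          protocol: TCP
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostport-pod3           # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/hostname: leguer-worker
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        ports:
        - containerPort: 8080
          hostPort: 54321           # same hostPort...
          hostIP: 127.0.0.2         # ...different hostIP
          protocol: UDP             # ...and different protocol
    EOF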
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.451 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":41,"skipped":838,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:31:31.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7100.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7100.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7100.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:31:37.344: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.456: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.467: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.473: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.527: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.531: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.536: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod 
dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.540: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:37.547: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local] Nov 18 06:31:42.554: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.559: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.582: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.587: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.591: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.594: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:42.600: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local] Nov 18 06:31:47.554: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.558: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.562: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.565: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.578: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.582: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.585: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.589: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:47.595: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local] Nov 18 06:31:52.560: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098) Nov 18 06:31:52.565: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.618: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.623: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.636: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.642: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.646: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.650: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:52.657: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local]
Nov 18 06:31:57.555: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.561: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.566: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.570: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.582: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.585: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.589: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.593: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:31:57.601: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local]
Nov 18 06:32:02.565: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.570: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.574: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.578: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.592: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.597: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.600: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.604: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local from pod dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098: the server could not find the requested resource (get pods dns-test-67ea5486-e743-42d3-add8-abc8e6a52098)
Nov 18 06:32:02.613: INFO: Lookups using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7100.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7100.svc.cluster.local jessie_udp@dns-test-service-2.dns-7100.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7100.svc.cluster.local]
Nov 18 06:32:07.627: INFO: DNS probes using dns-7100/dns-test-67ea5486-e743-42d3-add8-abc8e6a52098 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:32:08.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7100" for this suite.
• [SLOW TEST:37.080 seconds]
[sig-network] DNS
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":42,"skipped":845,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:32:08.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 18 06:32:12.993: INFO: Successfully updated pod "pod-update-0a498acc-4028-42ed-8bc6-84503532e4af"
STEP: verifying the updated pod is in kubernetes
Nov 18 06:32:13.045: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:32:13.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8276" for this suite.
• [SLOW TEST:5.011 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":860,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:32:13.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Nov 18 06:32:17.924: INFO: Successfully updated pod "annotationupdate8c760bba-0d94-4d06-b632-e3dc63305365"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:32:22.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6391" for this suite.
• [SLOW TEST:8.857 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update annotations on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":863,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:32:22.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Nov 18 06:32:27.170: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:32:28.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9649" for this suite.
• [SLOW TEST:6.195 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":45,"skipped":873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:32:28.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-6706
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 18 06:32:28.498: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 18 06:32:28.659: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 18 06:32:30.697: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 18 06:32:32.705: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 18 06:32:34.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:36.685: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:38.777: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:40.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:42.669: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:44.667: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:46.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:48.669: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:50.668: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:32:52.667: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 18 06:32:52.676: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 18 06:32:58.778: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.45 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6706 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 06:32:58.778: INFO: >>> kubeConfig: /root/.kube/config
I1118 06:32:58.848780 10 log.go:181] (0x40003d2420) (0x40043115e0) Create stream
I1118 06:32:58.849539 10 log.go:181] (0x40003d2420) (0x40043115e0) Stream added, broadcasting: 1
I1118 06:32:58.869460 10 log.go:181] (0x40003d2420) Reply frame received for 1
I1118 06:32:58.870220 10 log.go:181] (0x40003d2420) (0x4003e89360) Create stream
I1118 06:32:58.870299 10 log.go:181] (0x40003d2420) (0x4003e89360) Stream added, broadcasting: 3
I1118 06:32:58.872144 10 log.go:181] (0x40003d2420) Reply frame received for 3
I1118 06:32:58.872368 10 log.go:181] (0x40003d2420) (0x4004311680) Create stream
I1118 06:32:58.872426 10 log.go:181] (0x40003d2420) (0x4004311680) Stream added, broadcasting: 5
I1118 06:32:58.873943 10 log.go:181] (0x40003d2420) Reply frame received for 5
I1118 06:32:59.978197 10 log.go:181] (0x40003d2420) Data frame received for 3
I1118 06:32:59.978625 10 log.go:181] (0x40003d2420) Data frame received for 1
I1118 06:32:59.978790 10 log.go:181] (0x40043115e0) (1) Data frame handling
I1118 06:32:59.979100 10 log.go:181] (0x4003e89360) (3) Data frame handling
I1118 06:32:59.979464 10 log.go:181] (0x40003d2420) Data frame received for 5
I1118 06:32:59.979666 10 log.go:181] (0x4004311680) (5) Data frame handling
I1118 06:32:59.981105 10 log.go:181] (0x4003e89360) (3) Data frame sent
I1118 06:32:59.981423 10 log.go:181] (0x40043115e0) (1) Data frame sent
I1118 06:32:59.981771 10 log.go:181] (0x40003d2420) Data frame received for 3
I1118 06:32:59.981908 10 log.go:181] (0x4003e89360) (3) Data frame handling
I1118 06:32:59.984281 10 log.go:181] (0x40003d2420) (0x40043115e0) Stream removed, broadcasting: 1
I1118 06:32:59.986574 10 log.go:181] (0x40003d2420) Go away received
I1118 06:32:59.988822 10 log.go:181] (0x40003d2420) (0x40043115e0) Stream removed, broadcasting: 1
I1118 06:32:59.989385 10 log.go:181] (0x40003d2420) (0x4003e89360) Stream removed, broadcasting: 3
I1118 06:32:59.989631 10 log.go:181] (0x40003d2420) (0x4004311680) Stream removed, broadcasting: 5
Nov 18 06:32:59.990: INFO: Found all expected endpoints: [netserver-0]
Nov 18 06:32:59.997: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.119 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6706 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 06:32:59.997: INFO: >>> kubeConfig: /root/.kube/config
I1118 06:33:00.061716 10 log.go:181] (0x40001b06e0) (0x4000dfe780) Create stream
I1118 06:33:00.061885 10 log.go:181] (0x40001b06e0) (0x4000dfe780) Stream added, broadcasting: 1
I1118 06:33:00.066808 10 log.go:181] (0x40001b06e0) Reply frame received for 1
I1118 06:33:00.067090 10 log.go:181] (0x40001b06e0) (0x40035340a0) Create stream
I1118 06:33:00.067240 10 log.go:181] (0x40001b06e0) (0x40035340a0) Stream added, broadcasting: 3
I1118 06:33:00.069661 10 log.go:181] (0x40001b06e0) Reply frame received for 3
I1118 06:33:00.069988 10 log.go:181] (0x40001b06e0) (0x4001bbe460) Create stream
I1118 06:33:00.070129 10 log.go:181] (0x40001b06e0) (0x4001bbe460) Stream added, broadcasting: 5
I1118 06:33:00.072031 10 log.go:181] (0x40001b06e0) Reply frame received for 5
I1118 06:33:01.140462 10 log.go:181] (0x40001b06e0) Data frame received for 5
I1118 06:33:01.140676 10 log.go:181] (0x4001bbe460) (5) Data frame handling
I1118 06:33:01.141000 10 log.go:181] (0x40001b06e0) Data frame received for 3
I1118 06:33:01.141131 10 log.go:181] (0x40035340a0) (3) Data frame handling
I1118 06:33:01.141308 10 log.go:181] (0x40035340a0) (3) Data frame sent
I1118 06:33:01.141453 10 log.go:181] (0x40001b06e0) Data frame received for 3
I1118 06:33:01.141574 10 log.go:181] (0x40035340a0) (3) Data frame handling
I1118 06:33:01.143034 10 log.go:181] (0x40001b06e0) Data frame received for 1
I1118 06:33:01.143160 10 log.go:181] (0x4000dfe780) (1) Data frame handling
I1118 06:33:01.143282 10 log.go:181] (0x4000dfe780) (1) Data frame sent
I1118 06:33:01.143432 10 log.go:181] (0x40001b06e0) (0x4000dfe780) Stream removed, broadcasting: 1
I1118 06:33:01.143590 10 log.go:181] (0x40001b06e0) Go away received
I1118 06:33:01.144056 10 log.go:181] (0x40001b06e0) (0x4000dfe780) Stream removed, broadcasting: 1
I1118 06:33:01.144237 10 log.go:181] (0x40001b06e0) (0x40035340a0) Stream removed, broadcasting: 3
I1118 06:33:01.144410 10 log.go:181] (0x40001b06e0) (0x4001bbe460) Stream removed, broadcasting: 5
Nov 18 06:33:01.144: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:33:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6706" for this suite.
• [SLOW TEST:32.944 seconds]
[sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":903,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:33:01.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1118 06:33:04.724871 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Nov 18 06:34:06.994: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:06.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-706" for this suite.
• [SLOW TEST:65.847 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":47,"skipped":914,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:07.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-60ffb1f2-a872-421c-910c-b091c4ec1dc7
STEP: Creating a pod to test consume configMaps
Nov 18 06:34:07.094: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65" in namespace "projected-6213" to be "Succeeded or Failed"
Nov 18 06:34:07.118: INFO: Pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65": Phase="Pending", Reason="", readiness=false. Elapsed: 24.300719ms
Nov 18 06:34:09.125: INFO: Pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031170569s
Nov 18 06:34:11.130: INFO: Pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03612181s
Nov 18 06:34:13.137: INFO: Pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043053113s
STEP: Saw pod success
Nov 18 06:34:13.137: INFO: Pod "pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65" satisfied condition "Succeeded or Failed"
Nov 18 06:34:13.142: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65 container projected-configmap-volume-test:
STEP: delete the pod
Nov 18 06:34:13.251: INFO: Waiting for pod pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65 to disappear
Nov 18 06:34:13.255: INFO: Pod pod-projected-configmaps-abc9ee14-a3d6-4b9a-84c8-cc572ff7cf65 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:13.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6213" for this suite.
• [SLOW TEST:6.254 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":928,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:13.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-b35916b6-fe9f-40f4-924f-9f5257f6daf4
STEP: Creating configMap with name cm-test-opt-upd-540f527c-125a-40ac-ac79-05a313159841
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b35916b6-fe9f-40f4-924f-9f5257f6daf4
STEP: Updating configmap cm-test-opt-upd-540f527c-125a-40ac-ac79-05a313159841
STEP: Creating configMap with name cm-test-opt-create-597555e9-7020-46df-a197-8ccfe3455d08
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:23.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8128" for this suite.
• [SLOW TEST:10.249 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":49,"skipped":938,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:23.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Nov 18 06:34:23.653: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:40.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8424" for this suite.
• [SLOW TEST:16.813 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":946,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:40.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:40.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6124" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":948,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:40.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Nov 18 06:34:45.343: INFO: Successfully updated pod "labelsupdate25d2fc2d-ec3a-4236-968b-498faa439c06"
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:47.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6538" for this suite.
• [SLOW TEST:6.819 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":951,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:47.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:51.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9781" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":995,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:51.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:163
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:51.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2681" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":54,"skipped":997,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:51.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Nov 18 06:34:51.799: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5735 /api/v1/namespaces/watch-5735/configmaps/e2e-watch-test-resource-version 17d3a26b-0992-4cf5-b6ba-c72938de0c34 11985484 0 2020-11-18 06:34:51 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-11-18 06:34:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Nov 18 06:34:51.801: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5735 /api/v1/namespaces/watch-5735/configmaps/e2e-watch-test-resource-version 17d3a26b-0992-4cf5-b6ba-c72938de0c34 11985485 0 2020-11-18 06:34:51 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-11-18 06:34:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:34:51.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5735" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":55,"skipped":1039,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:34:51.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Nov 18 06:34:51.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:36:59.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6043" for this suite.
• [SLOW TEST:127.225 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":56,"skipped":1060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:36:59.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-ee0afaff-2edc-4fc2-83ea-b975b52bac28
STEP: Creating a pod to test consume configMaps
Nov 18 06:36:59.160: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882" in namespace "projected-4374" to be "Succeeded or Failed"
Nov 18 06:36:59.176: INFO: Pod "pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882": Phase="Pending", Reason="", readiness=false. Elapsed: 16.655884ms
Nov 18 06:37:01.221: INFO: Pod "pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061002052s
Nov 18 06:37:03.231: INFO: Pod "pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07108451s
STEP: Saw pod success
Nov 18 06:37:03.231: INFO: Pod "pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882" satisfied condition "Succeeded or Failed"
Nov 18 06:37:03.237: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882 container projected-configmap-volume-test:
STEP: delete the pod
Nov 18 06:37:03.308: INFO: Waiting for pod pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882 to disappear
Nov 18 06:37:03.313: INFO: Pod pod-projected-configmaps-127d9b5f-179e-45a0-a353-c13b7b3d5882 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:37:03.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4374" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":1108,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:37:03.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-6473
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 18 06:37:03.398: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Nov 18 06:37:03.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 18 06:37:05.529: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Nov 18 06:37:07.513: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:09.509: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:11.508: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:13.509: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:15.510: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:17.508: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:19.509: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:21.508: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:23.510: INFO: The status of Pod netserver-0 is Running (Ready = false)
Nov 18 06:37:25.508: INFO: The status of Pod netserver-0 is Running (Ready = true)
Nov 18 06:37:25.519: INFO: The status of Pod netserver-1 is Running (Ready = false)
Nov 18 06:37:27.527: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Nov 18 06:37:31.582: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.127:8080/dial?request=hostname&protocol=udp&host=10.244.2.50&port=8081&tries=1'] Namespace:pod-network-test-6473 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 06:37:31.582: INFO: >>> kubeConfig: /root/.kube/config
I1118 06:37:31.648357 10 log.go:181] (0x40026e8420) (0x4003e897c0) Create stream
I1118 06:37:31.648504 10 log.go:181] (0x40026e8420) (0x4003e897c0) Stream added, broadcasting: 1
I1118 06:37:31.652500 10 log.go:181] (0x40026e8420) Reply frame received for 1
I1118 06:37:31.652973 10 log.go:181] (0x40026e8420) (0x4003e89860) Create stream
I1118 06:37:31.653158 10 log.go:181] (0x40026e8420) (0x4003e89860) Stream added, broadcasting: 3
I1118 06:37:31.655227 10 log.go:181] (0x40026e8420) Reply frame received for 3
I1118 06:37:31.655403 10 log.go:181] (0x40026e8420) (0x4003e89900) Create stream
I1118 06:37:31.655485 10 log.go:181] (0x40026e8420) (0x4003e89900) Stream added, broadcasting: 5
I1118 06:37:31.657509 10 log.go:181] (0x40026e8420) Reply frame received for 5
I1118 06:37:31.767161 10 log.go:181] (0x40026e8420) Data frame received for 3
I1118 06:37:31.767356 10 log.go:181] (0x4003e89860) (3) Data frame handling
I1118 06:37:31.767522 10 log.go:181] (0x40026e8420) Data frame received for 5
I1118 06:37:31.767725 10 log.go:181] (0x4003e89900) (5) Data frame handling
I1118 06:37:31.767838 10 log.go:181] (0x4003e89860) (3) Data frame sent
I1118 06:37:31.768026 10 log.go:181] (0x40026e8420) Data frame received for 3
I1118 06:37:31.768134 10 log.go:181] (0x4003e89860) (3) Data frame handling
I1118 06:37:31.769656 10 log.go:181] (0x40026e8420) Data frame received for 1
I1118 06:37:31.769761 10 log.go:181] (0x4003e897c0) (1) Data frame handling
I1118 06:37:31.769869 10 log.go:181] (0x4003e897c0) (1) Data frame sent
I1118 06:37:31.770000 10 log.go:181] (0x40026e8420) (0x4003e897c0) Stream removed, broadcasting: 1
I1118 06:37:31.770156 10 log.go:181] (0x40026e8420) Go away received
I1118 06:37:31.770658 10 log.go:181] (0x40026e8420) (0x4003e897c0) Stream removed, broadcasting: 1
I1118 06:37:31.770820 10 log.go:181] (0x40026e8420) (0x4003e89860) Stream removed, broadcasting: 3
I1118 06:37:31.771006 10 log.go:181] (0x40026e8420) (0x4003e89900) Stream removed, broadcasting: 5
Nov 18 06:37:31.772: INFO: Waiting for responses: map[]
Nov 18 06:37:31.779: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.127:8080/dial?request=hostname&protocol=udp&host=10.244.1.126&port=8081&tries=1'] Namespace:pod-network-test-6473 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 06:37:31.779: INFO: >>> kubeConfig: /root/.kube/config
I1118 06:37:31.825260 10 log.go:181] (0x40003d2000) (0x4000dfe820) Create stream
I1118 06:37:31.825392 10 log.go:181] (0x40003d2000) (0x4000dfe820) Stream added, broadcasting: 1
I1118 06:37:31.828318 10 log.go:181] (0x40003d2000) Reply frame received for 1
I1118 06:37:31.828443 10 log.go:181] (0x40003d2000) (0x4003e899a0) Create stream
I1118 06:37:31.828515 10 log.go:181] (0x40003d2000) (0x4003e899a0) Stream added, broadcasting: 3
I1118 06:37:31.830055 10 log.go:181] (0x40003d2000) Reply frame received for 3
I1118 06:37:31.830250 10 log.go:181] (0x40003d2000) (0x4003e89a40) Create stream
I1118 06:37:31.830358 10 log.go:181] (0x40003d2000) (0x4003e89a40) Stream added, broadcasting: 5
I1118 06:37:31.832001 10 log.go:181] (0x40003d2000) Reply frame received for 5
I1118 06:37:31.901220 10 log.go:181] (0x40003d2000) Data frame received for 3
I1118 06:37:31.901448 10 log.go:181] (0x4003e899a0) (3) Data frame handling
I1118 06:37:31.901669 10 log.go:181] (0x4003e899a0) (3) Data frame sent
I1118 06:37:31.901874 10 log.go:181] (0x40003d2000) Data frame received for 5
I1118 06:37:31.902067 10 log.go:181] (0x4003e89a40) (5) Data frame handling
I1118 06:37:31.902190 10 log.go:181] (0x40003d2000) Data frame received for 3
I1118 06:37:31.902295 10 log.go:181] (0x4003e899a0) (3) Data frame handling
I1118 06:37:31.903309 10 log.go:181] (0x40003d2000) Data frame received for 1
I1118 06:37:31.903415 10 log.go:181] (0x4000dfe820) (1) Data frame handling
I1118 06:37:31.903541 10 log.go:181] (0x4000dfe820) (1) Data frame sent
I1118 06:37:31.903642 10 log.go:181] (0x40003d2000) (0x4000dfe820) Stream removed, broadcasting: 1
I1118 06:37:31.903762 10 log.go:181] (0x40003d2000) Go away received
I1118 06:37:31.904211 10 log.go:181] (0x40003d2000) (0x4000dfe820) Stream removed, broadcasting: 1
I1118 06:37:31.904336 10 log.go:181] (0x40003d2000) (0x4003e899a0) Stream removed, broadcasting: 3
I1118 06:37:31.904436 10 log.go:181] (0x40003d2000) (0x4003e89a40) Stream removed, broadcasting: 5
Nov 18 06:37:31.904: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:37:31.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6473" for this suite.
• [SLOW TEST:28.591 seconds]
[sig-network] Networking
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":1128,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:37:31.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Nov 18 06:37:32.026: INFO: Waiting up to 5m0s for pod "downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917" in namespace "downward-api-3773" to be "Succeeded or Failed"
Nov 18 06:37:32.041: INFO: Pod "downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917": Phase="Pending", Reason="", readiness=false. Elapsed: 14.735379ms
Nov 18 06:37:34.049: INFO: Pod "downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022884258s
Nov 18 06:37:36.058: INFO: Pod "downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032630276s
STEP: Saw pod success
Nov 18 06:37:36.059: INFO: Pod "downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917" satisfied condition "Succeeded or Failed"
Nov 18 06:37:36.064: INFO: Trying to get logs from node leguer-worker2 pod downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917 container dapi-container:
STEP: delete the pod
Nov 18 06:37:36.099: INFO: Waiting for pod downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917 to disappear
Nov 18 06:37:36.118: INFO: Pod downward-api-18f495ec-548e-48b7-8df8-7c858ebd9917 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 06:37:36.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3773" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":1139,"failed":0}
SSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 06:37:36.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-9140
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9140 to expose endpoints map[]
Nov 18 06:37:36.285: INFO: successfully validated that service multi-endpoint-test in namespace services-9140 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-9140
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9140 to expose endpoints map[pod1:[100]]
Nov 18 06:37:40.463: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry
Nov 18 06:37:42.399: INFO: successfully validated that service multi-endpoint-test in namespace services-9140 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-9140
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9140 to expose endpoints map[pod1:[100] pod2:[101]]
Nov 18 06:37:46.478: INFO: successfully validated that service multi-endpoint-test in namespace services-9140 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-9140
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9140 to expose endpoints map[pod2:[101]]
Nov 18 06:37:46.566: INFO: successfully validated that service multi-endpoint-test in namespace services-9140 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace
services-9140 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9140 to expose endpoints map[] Nov 18 06:37:47.610: INFO: successfully validated that service multi-endpoint-test in namespace services-9140 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:37:47.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9140" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.658 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":60,"skipped":1143,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:37:47.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5039 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Nov 18 06:37:48.338: INFO: Found 0 stateful pods, waiting for 3 Nov 18 06:37:58.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:37:58.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:37:58.351: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 18 06:38:08.351: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:38:08.351: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:38:08.351: INFO: Waiting for pod 
ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Nov 18 06:38:08.402: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Nov 18 06:38:18.487: INFO: Updating stateful set ss2 Nov 18 06:38:18.565: INFO: Waiting for Pod statefulset-5039/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Nov 18 06:38:29.055: INFO: Found 2 stateful pods, waiting for 3 Nov 18 06:38:39.208: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:38:39.208: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 06:38:39.208: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Nov 18 06:38:39.243: INFO: Updating stateful set ss2 Nov 18 06:38:39.278: INFO: Waiting for Pod statefulset-5039/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 06:38:49.293: INFO: Waiting for Pod statefulset-5039/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 06:38:59.321: INFO: Updating stateful set ss2 Nov 18 06:38:59.351: INFO: Waiting for StatefulSet statefulset-5039/ss2 to complete update Nov 18 06:38:59.352: INFO: Waiting for Pod statefulset-5039/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 06:39:09.379: INFO: Deleting all statefulset in ns statefulset-5039 Nov 18 06:39:09.429: INFO: Scaling statefulset ss2 to 0 Nov 18 06:39:19.458: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 06:39:19.463: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:19.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5039" for this suite. 
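------------------------------
The canary and phased behaviour validated above is driven by a single field, spec.updateStrategy.rollingUpdate.partition: the StatefulSet controller only moves pods with ordinal >= partition to the update revision, so lowering the partition step by step rolls a template change out in phases. A minimal sketch of that strategy in Go follows; it is illustrative, not the test's source, and assumes k8s.io/api is on the module path.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// With partition = 2 and three replicas, only ss2-2 receives the new
	// template (the canary); setting it to 0 later updates ss2-1 and ss2-0.
	partition := int32(2) // illustrative value
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	out, _ := json.MarshalIndent(strategy, "", "  ")
	fmt.Println(string(out))
}
------------------------------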
• [SLOW TEST:91.722 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":61,"skipped":1156,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:19.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Nov 18 06:39:19.656: INFO: created test-event-1 Nov 18 06:39:19.661: INFO: created test-event-2 Nov 18 06:39:19.678: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Nov 18 06:39:19.753: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Nov 18 06:39:19.782: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:19.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8512" for this suite. 
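------------------------------
The spec above relies on the DeleteCollection verb: one request removes every Event matching a label selector, and a follow-up List confirms the remaining quantity. A minimal client-go sketch follows; it is illustrative rather than the test's source, assumes a reachable cluster with a kubeconfig at the default path, and uses a placeholder namespace and label.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Delete every Event in the namespace carrying the (placeholder) label,
	// mirroring the "delete collection of events" step above.
	err = cs.CoreV1().Events("default").DeleteCollection(
		context.TODO(),
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "testevent-set=true"},
	)
	if err != nil {
		panic(err)
	}
}
------------------------------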
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":62,"skipped":1164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:19.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-da6ec964-fb17-4f33-bcf5-175bf7b0d44f STEP: Creating a pod to test consume configMaps Nov 18 06:39:19.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2" in namespace "configmap-9226" to be "Succeeded or Failed" Nov 18 06:39:19.950: INFO: Pod "pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.196909ms Nov 18 06:39:21.957: INFO: Pod "pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036418632s Nov 18 06:39:23.965: INFO: Pod "pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044392097s STEP: Saw pod success Nov 18 06:39:23.966: INFO: Pod "pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2" satisfied condition "Succeeded or Failed" Nov 18 06:39:23.971: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2 container configmap-volume-test: STEP: delete the pod Nov 18 06:39:24.053: INFO: Waiting for pod pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2 to disappear Nov 18 06:39:24.218: INFO: Pod pod-configmaps-c0d1b55c-3607-488b-9ba9-954d92662ea2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:24.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9226" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":1194,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:24.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Nov 18 06:39:24.361: INFO: Waiting up to 5m0s for pod "var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0" in namespace "var-expansion-6819" to be "Succeeded or Failed" Nov 18 06:39:24.383: INFO: Pod "var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0": Phase="Pending", Reason="", readiness=false. Elapsed: 21.255461ms Nov 18 06:39:26.391: INFO: Pod "var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029560545s Nov 18 06:39:28.398: INFO: Pod "var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036957039s STEP: Saw pod success Nov 18 06:39:28.399: INFO: Pod "var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0" satisfied condition "Succeeded or Failed" Nov 18 06:39:28.403: INFO: Trying to get logs from node leguer-worker pod var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0 container dapi-container: STEP: delete the pod Nov 18 06:39:28.471: INFO: Waiting for pod var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0 to disappear Nov 18 06:39:28.479: INFO: Pod var-expansion-1ca59866-776a-44dc-9b9b-08434b4198a0 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:28.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6819" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1210,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:28.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 18 06:39:28.554: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 18 06:39:28.572: INFO: Waiting for terminating namespaces to be deleted... Nov 18 06:39:28.583: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Nov 18 06:39:28.591: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 06:39:28.591: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:39:28.591: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 06:39:28.591: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 06:39:28.591: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Nov 18 06:39:28.599: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 06:39:28.599: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 06:39:28.599: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Nov 18 06:39:28.599: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.164886e122502dfa], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.164886e123691eea], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:29.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4886" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":65,"skipped":1217,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:29.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:39:46.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4137" for this suite. • [SLOW TEST:16.510 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":66,"skipped":1241,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:39:46.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-kw5m STEP: Creating a pod to test atomic-volume-subpath Nov 18 06:39:46.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kw5m" in namespace "subpath-6744" to be "Succeeded or Failed" Nov 18 06:39:46.304: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.889434ms Nov 18 06:39:48.313: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013754528s Nov 18 06:39:50.321: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 4.02162723s Nov 18 06:39:52.330: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 6.030041566s Nov 18 06:39:54.337: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 8.037854324s Nov 18 06:39:56.345: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.0457017s Nov 18 06:39:58.354: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 12.054603207s Nov 18 06:40:00.361: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 14.061565757s Nov 18 06:40:02.369: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 16.069571887s Nov 18 06:40:04.377: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 18.077863016s Nov 18 06:40:06.386: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 20.08621652s Nov 18 06:40:08.394: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Running", Reason="", readiness=true. Elapsed: 22.094128581s Nov 18 06:40:10.419: INFO: Pod "pod-subpath-test-projected-kw5m": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.119805706s STEP: Saw pod success Nov 18 06:40:10.420: INFO: Pod "pod-subpath-test-projected-kw5m" satisfied condition "Succeeded or Failed" Nov 18 06:40:10.425: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-projected-kw5m container test-container-subpath-projected-kw5m: STEP: delete the pod Nov 18 06:40:10.448: INFO: Waiting for pod pod-subpath-test-projected-kw5m to disappear Nov 18 06:40:10.471: INFO: Pod pod-subpath-test-projected-kw5m no longer exists STEP: Deleting pod pod-subpath-test-projected-kw5m Nov 18 06:40:10.472: INFO: Deleting pod "pod-subpath-test-projected-kw5m" in namespace "subpath-6744" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:40:10.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6744" for this suite. • [SLOW TEST:24.317 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":67,"skipped":1254,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:40:10.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Nov 18 06:40:10.592: INFO: Waiting up to 5m0s for pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2" in namespace "containers-8234" to be "Succeeded or Failed" Nov 18 06:40:10.596: INFO: Pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49971ms Nov 18 06:40:12.633: INFO: Pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041531439s Nov 18 06:40:14.640: INFO: Pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.048218524s Nov 18 06:40:16.647: INFO: Pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055525448s STEP: Saw pod success Nov 18 06:40:16.648: INFO: Pod "client-containers-499c9321-9312-444b-9236-ce4e010e6da2" satisfied condition "Succeeded or Failed" Nov 18 06:40:16.652: INFO: Trying to get logs from node leguer-worker pod client-containers-499c9321-9312-444b-9236-ce4e010e6da2 container test-container: STEP: delete the pod Nov 18 06:40:16.704: INFO: Waiting for pod client-containers-499c9321-9312-444b-9236-ce4e010e6da2 to disappear Nov 18 06:40:16.708: INFO: Pod client-containers-499c9321-9312-444b-9236-ce4e010e6da2 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:40:16.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8234" for this suite. • [SLOW TEST:6.230 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":68,"skipped":1266,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:40:16.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:40:19.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:40:21.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:40:24.781: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:40:35.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3166" for this suite. STEP: Destroying namespace "webhook-3166-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.362 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":69,"skipped":1267,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:40:35.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-78d2c513-661b-484c-a7d6-42214185f3a4 STEP: Creating a pod to test consume secrets Nov 18 06:40:35.248: INFO: Waiting up to 5m0s for pod "pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297" in namespace "secrets-6058" to be "Succeeded or Failed" Nov 18 06:40:35.285: INFO: Pod "pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297": Phase="Pending", Reason="", readiness=false. Elapsed: 37.380115ms Nov 18 06:40:37.293: INFO: Pod "pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045387347s Nov 18 06:40:39.304: INFO: Pod "pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05623461s STEP: Saw pod success Nov 18 06:40:39.305: INFO: Pod "pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297" satisfied condition "Succeeded or Failed" Nov 18 06:40:39.310: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297 container secret-volume-test: STEP: delete the pod Nov 18 06:40:39.389: INFO: Waiting for pod pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297 to disappear Nov 18 06:40:39.405: INFO: Pod pod-secrets-b670282f-e59e-41fb-bbbe-5fca80b97297 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:40:39.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6058" for this suite. 
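------------------------------
The spec above mounts one Secret into the same pod twice, through two differently named volumes with separate mount points, and checks the data is readable at both. A minimal sketch of that pod spec in Go (illustrative names, image, and key, not the test's generated ones):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Both volumes project the same Secret; only the mount points differ.
	secretVol := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
			},
		}
	}
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "busybox:1.28", // placeholder image
			Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
------------------------------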
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1286,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:40:39.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-nhdq STEP: Creating a pod to test atomic-volume-subpath Nov 18 06:40:39.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nhdq" in namespace "subpath-5695" to be "Succeeded or Failed" Nov 18 06:40:39.605: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Pending", Reason="", readiness=false. Elapsed: 35.204351ms Nov 18 06:40:41.611: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040927722s Nov 18 06:40:43.618: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 4.048528835s Nov 18 06:40:46.929: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 7.358985297s Nov 18 06:40:48.936: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 9.366618149s Nov 18 06:40:50.943: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 11.373146509s Nov 18 06:40:52.949: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 13.37908142s Nov 18 06:40:54.956: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 15.386441218s Nov 18 06:40:56.963: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 17.392823371s Nov 18 06:40:58.969: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 19.399524227s Nov 18 06:41:00.991: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 21.421206646s Nov 18 06:41:03.000: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Running", Reason="", readiness=true. Elapsed: 23.429818152s Nov 18 06:41:05.009: INFO: Pod "pod-subpath-test-configmap-nhdq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 25.438908108s STEP: Saw pod success Nov 18 06:41:05.009: INFO: Pod "pod-subpath-test-configmap-nhdq" satisfied condition "Succeeded or Failed" Nov 18 06:41:05.014: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-nhdq container test-container-subpath-configmap-nhdq: STEP: delete the pod Nov 18 06:41:05.080: INFO: Waiting for pod pod-subpath-test-configmap-nhdq to disappear Nov 18 06:41:05.092: INFO: Pod pod-subpath-test-configmap-nhdq no longer exists STEP: Deleting pod pod-subpath-test-configmap-nhdq Nov 18 06:41:05.092: INFO: Deleting pod "pod-subpath-test-configmap-nhdq" in namespace "subpath-5695" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:41:05.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5695" for this suite. • [SLOW TEST:25.686 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":71,"skipped":1298,"failed":0} [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:41:05.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:41:05.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4776" for this suite. 
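------------------------------
The ServiceAccount lifecycle steps above map one-to-one onto client-go verbs: Create, Watch, Patch, List with a label selector, Delete. A condensed sketch of the non-watch steps follows; it is illustrative rather than the test's source, assumes a reachable cluster with a kubeconfig at the default path, and uses placeholder names and labels.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	sas := kubernetes.NewForConfigOrDie(cfg).CoreV1().ServiceAccounts("default")
	ctx := context.TODO()

	// Create, then patch a label onto the ServiceAccount.
	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "test-serviceaccount"}}
	if _, err := sas.Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"labels":{"purpose":"e2e"}}}`)
	if _, err := sas.Patch(ctx, "test-serviceaccount", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Find it again purely by the patched label, then delete it.
	list, err := sas.List(ctx, metav1.ListOptions{LabelSelector: "purpose=e2e"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d ServiceAccount(s) by label\n", len(list.Items))
	if err := sas.Delete(ctx, "test-serviceaccount", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------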
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":72,"skipped":1298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:41:05.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:41:05.461: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4270 I1118 06:41:05.531831 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4270, replica count: 1 I1118 06:41:06.583277 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:41:07.584147 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:41:08.584961 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:41:09.585756 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 06:41:09.723: INFO: Created: latency-svc-x8gjd Nov 18 06:41:09.752: INFO: Got endpoints: latency-svc-x8gjd [63.733226ms] Nov 18 06:41:09.784: INFO: Created: latency-svc-fnsqf Nov 18 06:41:09.832: INFO: Got endpoints: latency-svc-fnsqf [79.485137ms] Nov 18 06:41:09.857: INFO: Created: latency-svc-fbbtz Nov 18 06:41:09.876: INFO: Got endpoints: latency-svc-fbbtz [122.793387ms] Nov 18 06:41:09.912: INFO: Created: latency-svc-g96rs Nov 18 06:41:09.929: INFO: Got endpoints: latency-svc-g96rs [175.024414ms] Nov 18 06:41:09.964: INFO: Created: latency-svc-qdjc6 Nov 18 06:41:09.972: INFO: Got endpoints: latency-svc-qdjc6 [217.568098ms] Nov 18 06:41:10.001: INFO: Created: latency-svc-5vvds Nov 18 06:41:10.018: INFO: Got endpoints: latency-svc-5vvds [264.816027ms] Nov 18 06:41:10.050: INFO: Created: latency-svc-fg8pn Nov 18 06:41:10.121: INFO: Got endpoints: latency-svc-fg8pn [366.814872ms] Nov 18 06:41:10.122: INFO: Created: latency-svc-bm4sx Nov 18 06:41:10.146: INFO: Got endpoints: latency-svc-bm4sx [391.780351ms] Nov 18 06:41:10.174: INFO: Created: latency-svc-t8jhk Nov 18 06:41:10.194: INFO: Got endpoints: latency-svc-t8jhk [440.975374ms] Nov 18 06:41:10.322: INFO: Created: latency-svc-dnhjb Nov 18 06:41:10.355: INFO: Got endpoints: latency-svc-dnhjb [601.2652ms] Nov 18 06:41:10.431: INFO: Created: latency-svc-bmb48 Nov 18 06:41:10.440: INFO: Got endpoints: latency-svc-bmb48 [686.288176ms] Nov 18 06:41:10.464: INFO: Created: latency-svc-nqlmg Nov 18 
06:41:10.477: INFO: Got endpoints: latency-svc-nqlmg [724.452083ms] Nov 18 06:41:10.525: INFO: Created: latency-svc-bmbdc Nov 18 06:41:10.589: INFO: Got endpoints: latency-svc-bmbdc [835.321148ms] Nov 18 06:41:10.619: INFO: Created: latency-svc-7ws8s Nov 18 06:41:10.639: INFO: Got endpoints: latency-svc-7ws8s [884.782211ms] Nov 18 06:41:10.679: INFO: Created: latency-svc-x9jpk Nov 18 06:41:10.718: INFO: Got endpoints: latency-svc-x9jpk [963.743273ms] Nov 18 06:41:10.752: INFO: Created: latency-svc-wbvgh Nov 18 06:41:10.772: INFO: Got endpoints: latency-svc-wbvgh [1.017096876s] Nov 18 06:41:10.815: INFO: Created: latency-svc-m9rnm Nov 18 06:41:10.886: INFO: Got endpoints: latency-svc-m9rnm [167.696927ms] Nov 18 06:41:10.918: INFO: Created: latency-svc-fnfp5 Nov 18 06:41:10.963: INFO: Got endpoints: latency-svc-fnfp5 [1.130306126s] Nov 18 06:41:11.024: INFO: Created: latency-svc-nnpgp Nov 18 06:41:11.052: INFO: Got endpoints: latency-svc-nnpgp [1.176227637s] Nov 18 06:41:11.056: INFO: Created: latency-svc-m6mbb Nov 18 06:41:11.074: INFO: Got endpoints: latency-svc-m6mbb [1.144424081s] Nov 18 06:41:11.117: INFO: Created: latency-svc-4lm86 Nov 18 06:41:11.156: INFO: Got endpoints: latency-svc-4lm86 [1.183542989s] Nov 18 06:41:11.215: INFO: Created: latency-svc-fnrpl Nov 18 06:41:11.232: INFO: Got endpoints: latency-svc-fnrpl [1.213685928s] Nov 18 06:41:11.317: INFO: Created: latency-svc-p8jcr Nov 18 06:41:11.323: INFO: Got endpoints: latency-svc-p8jcr [1.200958682s] Nov 18 06:41:11.386: INFO: Created: latency-svc-8bbts Nov 18 06:41:11.404: INFO: Got endpoints: latency-svc-8bbts [1.257968077s] Nov 18 06:41:11.457: INFO: Created: latency-svc-vkkzh Nov 18 06:41:11.470: INFO: Got endpoints: latency-svc-vkkzh [1.275259017s] Nov 18 06:41:11.489: INFO: Created: latency-svc-kp8zt Nov 18 06:41:11.505: INFO: Got endpoints: latency-svc-kp8zt [1.150113143s] Nov 18 06:41:11.542: INFO: Created: latency-svc-vtflh Nov 18 06:41:11.554: INFO: Got endpoints: latency-svc-vtflh [1.113530951s] Nov 18 06:41:11.626: INFO: Created: latency-svc-55knl Nov 18 06:41:11.652: INFO: Got endpoints: latency-svc-55knl [1.174904067s] Nov 18 06:41:11.693: INFO: Created: latency-svc-xx57x Nov 18 06:41:11.748: INFO: Got endpoints: latency-svc-xx57x [1.158750769s] Nov 18 06:41:11.765: INFO: Created: latency-svc-fxl95 Nov 18 06:41:11.787: INFO: Got endpoints: latency-svc-fxl95 [1.148596645s] Nov 18 06:41:11.823: INFO: Created: latency-svc-qdsnj Nov 18 06:41:11.842: INFO: Got endpoints: latency-svc-qdsnj [1.070386343s] Nov 18 06:41:11.927: INFO: Created: latency-svc-rm2gp Nov 18 06:41:11.952: INFO: Got endpoints: latency-svc-rm2gp [1.065226731s] Nov 18 06:41:12.079: INFO: Created: latency-svc-4rzrg Nov 18 06:41:12.083: INFO: Got endpoints: latency-svc-4rzrg [1.119387741s] Nov 18 06:41:12.132: INFO: Created: latency-svc-mpb9l Nov 18 06:41:12.233: INFO: Got endpoints: latency-svc-mpb9l [1.180336039s] Nov 18 06:41:12.279: INFO: Created: latency-svc-d8b2s Nov 18 06:41:12.304: INFO: Got endpoints: latency-svc-d8b2s [1.229634352s] Nov 18 06:41:12.389: INFO: Created: latency-svc-7w6xd Nov 18 06:41:12.407: INFO: Got endpoints: latency-svc-7w6xd [1.250248425s] Nov 18 06:41:12.438: INFO: Created: latency-svc-mmpfg Nov 18 06:41:12.455: INFO: Got endpoints: latency-svc-mmpfg [1.22259919s] Nov 18 06:41:12.536: INFO: Created: latency-svc-jvdjz Nov 18 06:41:12.559: INFO: Got endpoints: latency-svc-jvdjz [1.236324519s] Nov 18 06:41:12.594: INFO: Created: latency-svc-vmznr Nov 18 06:41:12.617: INFO: Got endpoints: latency-svc-vmznr [1.212542439s] Nov 
18 06:41:12.697: INFO: Created: latency-svc-78q4d Nov 18 06:41:12.701: INFO: Got endpoints: latency-svc-78q4d [1.230841606s] Nov 18 06:41:12.754: INFO: Created: latency-svc-66kbf Nov 18 06:41:12.768: INFO: Got endpoints: latency-svc-66kbf [1.262330505s] Nov 18 06:41:12.838: INFO: Created: latency-svc-5xvdd Nov 18 06:41:12.858: INFO: Got endpoints: latency-svc-5xvdd [1.303278772s] Nov 18 06:41:12.887: INFO: Created: latency-svc-66cpb Nov 18 06:41:12.911: INFO: Got endpoints: latency-svc-66cpb [1.258707878s] Nov 18 06:41:12.995: INFO: Created: latency-svc-l7xww Nov 18 06:41:13.006: INFO: Got endpoints: latency-svc-l7xww [1.257869399s] Nov 18 06:41:13.029: INFO: Created: latency-svc-b5k84 Nov 18 06:41:13.048: INFO: Got endpoints: latency-svc-b5k84 [1.260589753s] Nov 18 06:41:13.079: INFO: Created: latency-svc-f76s4 Nov 18 06:41:13.093: INFO: Got endpoints: latency-svc-f76s4 [1.250603161s] Nov 18 06:41:13.176: INFO: Created: latency-svc-4gqxz Nov 18 06:41:13.189: INFO: Got endpoints: latency-svc-4gqxz [1.236826955s] Nov 18 06:41:13.215: INFO: Created: latency-svc-sn7lj Nov 18 06:41:13.355: INFO: Got endpoints: latency-svc-sn7lj [1.27196247s] Nov 18 06:41:13.359: INFO: Created: latency-svc-t5n7p Nov 18 06:41:13.363: INFO: Got endpoints: latency-svc-t5n7p [1.130022951s] Nov 18 06:41:13.385: INFO: Created: latency-svc-7tgh9 Nov 18 06:41:13.414: INFO: Got endpoints: latency-svc-7tgh9 [1.110040583s] Nov 18 06:41:13.444: INFO: Created: latency-svc-xvk4d Nov 18 06:41:13.505: INFO: Got endpoints: latency-svc-xvk4d [1.097574459s] Nov 18 06:41:13.505: INFO: Created: latency-svc-8qvt4 Nov 18 06:41:13.513: INFO: Got endpoints: latency-svc-8qvt4 [1.058044419s] Nov 18 06:41:13.541: INFO: Created: latency-svc-v2p56 Nov 18 06:41:13.562: INFO: Got endpoints: latency-svc-v2p56 [1.003056512s] Nov 18 06:41:13.583: INFO: Created: latency-svc-4xd8s Nov 18 06:41:13.646: INFO: Got endpoints: latency-svc-4xd8s [1.028878124s] Nov 18 06:41:13.659: INFO: Created: latency-svc-j8r4j Nov 18 06:41:13.671: INFO: Got endpoints: latency-svc-j8r4j [970.565125ms] Nov 18 06:41:13.689: INFO: Created: latency-svc-r8f8d Nov 18 06:41:13.713: INFO: Got endpoints: latency-svc-r8f8d [944.960156ms] Nov 18 06:41:13.746: INFO: Created: latency-svc-w87j8 Nov 18 06:41:13.791: INFO: Got endpoints: latency-svc-w87j8 [933.586429ms] Nov 18 06:41:13.793: INFO: Created: latency-svc-s6m6g Nov 18 06:41:13.810: INFO: Got endpoints: latency-svc-s6m6g [898.393222ms] Nov 18 06:41:13.829: INFO: Created: latency-svc-qgppz Nov 18 06:41:13.846: INFO: Got endpoints: latency-svc-qgppz [839.697065ms] Nov 18 06:41:13.869: INFO: Created: latency-svc-4hjl6 Nov 18 06:41:13.958: INFO: Got endpoints: latency-svc-4hjl6 [909.409619ms] Nov 18 06:41:13.964: INFO: Created: latency-svc-d82f7 Nov 18 06:41:13.980: INFO: Got endpoints: latency-svc-d82f7 [887.176701ms] Nov 18 06:41:13.996: INFO: Created: latency-svc-l9888 Nov 18 06:41:14.008: INFO: Got endpoints: latency-svc-l9888 [819.282724ms] Nov 18 06:41:14.045: INFO: Created: latency-svc-tfx5t Nov 18 06:41:14.114: INFO: Got endpoints: latency-svc-tfx5t [759.412121ms] Nov 18 06:41:14.133: INFO: Created: latency-svc-m8d4d Nov 18 06:41:14.163: INFO: Got endpoints: latency-svc-m8d4d [799.628416ms] Nov 18 06:41:14.276: INFO: Created: latency-svc-pbqls Nov 18 06:41:14.282: INFO: Got endpoints: latency-svc-pbqls [867.950274ms] Nov 18 06:41:14.319: INFO: Created: latency-svc-6q4hb Nov 18 06:41:14.334: INFO: Got endpoints: latency-svc-6q4hb [829.724478ms] Nov 18 06:41:14.355: INFO: Created: latency-svc-lrd59 Nov 18 06:41:14.433: 
INFO: Got endpoints: latency-svc-lrd59 [919.523726ms] Nov 18 06:41:14.434: INFO: Created: latency-svc-zgqc4 Nov 18 06:41:14.466: INFO: Got endpoints: latency-svc-zgqc4 [903.490752ms] Nov 18 06:41:14.506: INFO: Created: latency-svc-6n5w8 Nov 18 06:41:14.523: INFO: Got endpoints: latency-svc-6n5w8 [876.591945ms] Nov 18 06:41:14.611: INFO: Created: latency-svc-k4pgp Nov 18 06:41:14.625: INFO: Got endpoints: latency-svc-k4pgp [953.074955ms] Nov 18 06:41:14.644: INFO: Created: latency-svc-n2gf7 Nov 18 06:41:14.655: INFO: Got endpoints: latency-svc-n2gf7 [941.464645ms] Nov 18 06:41:14.674: INFO: Created: latency-svc-vf5bx Nov 18 06:41:14.693: INFO: Got endpoints: latency-svc-vf5bx [901.512081ms] Nov 18 06:41:14.755: INFO: Created: latency-svc-wrlvd Nov 18 06:41:14.781: INFO: Created: latency-svc-l28jt Nov 18 06:41:14.781: INFO: Got endpoints: latency-svc-wrlvd [970.989609ms] Nov 18 06:41:14.811: INFO: Got endpoints: latency-svc-l28jt [964.09079ms] Nov 18 06:41:14.841: INFO: Created: latency-svc-l47kp Nov 18 06:41:14.894: INFO: Got endpoints: latency-svc-l47kp [935.556622ms] Nov 18 06:41:14.946: INFO: Created: latency-svc-29cz8 Nov 18 06:41:14.970: INFO: Got endpoints: latency-svc-29cz8 [989.380119ms] Nov 18 06:41:15.042: INFO: Created: latency-svc-2fpsv Nov 18 06:41:15.047: INFO: Got endpoints: latency-svc-2fpsv [1.039050211s] Nov 18 06:41:15.069: INFO: Created: latency-svc-btj68 Nov 18 06:41:15.083: INFO: Got endpoints: latency-svc-btj68 [968.325761ms] Nov 18 06:41:15.100: INFO: Created: latency-svc-7wvkf Nov 18 06:41:15.113: INFO: Got endpoints: latency-svc-7wvkf [949.661111ms] Nov 18 06:41:15.137: INFO: Created: latency-svc-bvkpz Nov 18 06:41:15.197: INFO: Got endpoints: latency-svc-bvkpz [915.232326ms] Nov 18 06:41:15.218: INFO: Created: latency-svc-2jl8q Nov 18 06:41:15.251: INFO: Got endpoints: latency-svc-2jl8q [916.523025ms] Nov 18 06:41:15.291: INFO: Created: latency-svc-v7fh5 Nov 18 06:41:15.366: INFO: Got endpoints: latency-svc-v7fh5 [933.204829ms] Nov 18 06:41:15.371: INFO: Created: latency-svc-dwjl7 Nov 18 06:41:15.384: INFO: Got endpoints: latency-svc-dwjl7 [917.487733ms] Nov 18 06:41:15.434: INFO: Created: latency-svc-7d4tf Nov 18 06:41:15.451: INFO: Got endpoints: latency-svc-7d4tf [928.009847ms] Nov 18 06:41:15.509: INFO: Created: latency-svc-85f6f Nov 18 06:41:15.514: INFO: Got endpoints: latency-svc-85f6f [888.873442ms] Nov 18 06:41:15.569: INFO: Created: latency-svc-4ndjv Nov 18 06:41:15.577: INFO: Got endpoints: latency-svc-4ndjv [922.453111ms] Nov 18 06:41:15.599: INFO: Created: latency-svc-8jhnd Nov 18 06:41:15.608: INFO: Got endpoints: latency-svc-8jhnd [914.898321ms] Nov 18 06:41:15.665: INFO: Created: latency-svc-qskkg Nov 18 06:41:15.675: INFO: Got endpoints: latency-svc-qskkg [893.216999ms] Nov 18 06:41:15.705: INFO: Created: latency-svc-5k9j5 Nov 18 06:41:15.723: INFO: Got endpoints: latency-svc-5k9j5 [911.715761ms] Nov 18 06:41:15.748: INFO: Created: latency-svc-6wrjc Nov 18 06:41:15.759: INFO: Got endpoints: latency-svc-6wrjc [864.77896ms] Nov 18 06:41:15.826: INFO: Created: latency-svc-vwftw Nov 18 06:41:15.829: INFO: Got endpoints: latency-svc-vwftw [859.002028ms] Nov 18 06:41:15.867: INFO: Created: latency-svc-p65fm Nov 18 06:41:15.896: INFO: Got endpoints: latency-svc-p65fm [848.775052ms] Nov 18 06:41:16.001: INFO: Created: latency-svc-58zn2 Nov 18 06:41:16.004: INFO: Got endpoints: latency-svc-58zn2 [920.819128ms] Nov 18 06:41:16.066: INFO: Created: latency-svc-gsnll Nov 18 06:41:16.081: INFO: Got endpoints: latency-svc-gsnll [967.92141ms] Nov 18 06:41:16.131: 
INFO: Created: latency-svc-xjc84 Nov 18 06:41:16.141: INFO: Got endpoints: latency-svc-xjc84 [943.377463ms] Nov 18 06:41:16.173: INFO: Created: latency-svc-d2bz6 Nov 18 06:41:16.196: INFO: Got endpoints: latency-svc-d2bz6 [944.15599ms] Nov 18 06:41:16.222: INFO: Created: latency-svc-lr82k Nov 18 06:41:16.259: INFO: Got endpoints: latency-svc-lr82k [892.699729ms] Nov 18 06:41:16.289: INFO: Created: latency-svc-r7h5b Nov 18 06:41:16.334: INFO: Got endpoints: latency-svc-r7h5b [949.913936ms] Nov 18 06:41:16.416: INFO: Created: latency-svc-g8q4h Nov 18 06:41:16.420: INFO: Got endpoints: latency-svc-g8q4h [968.676776ms] Nov 18 06:41:16.475: INFO: Created: latency-svc-98qmm Nov 18 06:41:16.491: INFO: Got endpoints: latency-svc-98qmm [977.415598ms] Nov 18 06:41:16.569: INFO: Created: latency-svc-lppnz Nov 18 06:41:16.582: INFO: Got endpoints: latency-svc-lppnz [1.004092201s] Nov 18 06:41:16.605: INFO: Created: latency-svc-8tcp4 Nov 18 06:41:16.618: INFO: Got endpoints: latency-svc-8tcp4 [1.009759291s] Nov 18 06:41:16.640: INFO: Created: latency-svc-9mlkx Nov 18 06:41:16.653: INFO: Got endpoints: latency-svc-9mlkx [978.498944ms] Nov 18 06:41:16.701: INFO: Created: latency-svc-vwbwb Nov 18 06:41:16.705: INFO: Got endpoints: latency-svc-vwbwb [982.144735ms] Nov 18 06:41:16.745: INFO: Created: latency-svc-lbgj6 Nov 18 06:41:16.758: INFO: Got endpoints: latency-svc-lbgj6 [999.116518ms] Nov 18 06:41:16.833: INFO: Created: latency-svc-wgwhl Nov 18 06:41:16.868: INFO: Got endpoints: latency-svc-wgwhl [1.038513772s] Nov 18 06:41:16.869: INFO: Created: latency-svc-jzv9w Nov 18 06:41:16.898: INFO: Got endpoints: latency-svc-jzv9w [1.001357174s] Nov 18 06:41:16.930: INFO: Created: latency-svc-t9qpj Nov 18 06:41:16.984: INFO: Got endpoints: latency-svc-t9qpj [979.661391ms] Nov 18 06:41:17.037: INFO: Created: latency-svc-kl5rk Nov 18 06:41:17.052: INFO: Got endpoints: latency-svc-kl5rk [970.70888ms] Nov 18 06:41:17.073: INFO: Created: latency-svc-c4622 Nov 18 06:41:17.109: INFO: Got endpoints: latency-svc-c4622 [967.694285ms] Nov 18 06:41:17.122: INFO: Created: latency-svc-2pvf2 Nov 18 06:41:17.150: INFO: Got endpoints: latency-svc-2pvf2 [953.882615ms] Nov 18 06:41:17.170: INFO: Created: latency-svc-nvmft Nov 18 06:41:17.183: INFO: Got endpoints: latency-svc-nvmft [923.359108ms] Nov 18 06:41:17.204: INFO: Created: latency-svc-kzplp Nov 18 06:41:17.270: INFO: Got endpoints: latency-svc-kzplp [935.923713ms] Nov 18 06:41:17.307: INFO: Created: latency-svc-zkwqb Nov 18 06:41:17.319: INFO: Got endpoints: latency-svc-zkwqb [898.77911ms] Nov 18 06:41:17.364: INFO: Created: latency-svc-ghhvs Nov 18 06:41:17.450: INFO: Got endpoints: latency-svc-ghhvs [957.849196ms] Nov 18 06:41:17.452: INFO: Created: latency-svc-txvvm Nov 18 06:41:17.463: INFO: Got endpoints: latency-svc-txvvm [880.890795ms] Nov 18 06:41:17.492: INFO: Created: latency-svc-q9blh Nov 18 06:41:17.518: INFO: Got endpoints: latency-svc-q9blh [899.688866ms] Nov 18 06:41:17.617: INFO: Created: latency-svc-p28cm Nov 18 06:41:17.658: INFO: Created: latency-svc-8bdhv Nov 18 06:41:17.659: INFO: Got endpoints: latency-svc-p28cm [1.005925911s] Nov 18 06:41:17.667: INFO: Got endpoints: latency-svc-8bdhv [962.022952ms] Nov 18 06:41:17.690: INFO: Created: latency-svc-8zzdz Nov 18 06:41:17.704: INFO: Got endpoints: latency-svc-8zzdz [945.704518ms] Nov 18 06:41:17.762: INFO: Created: latency-svc-s595b Nov 18 06:41:17.766: INFO: Got endpoints: latency-svc-s595b [898.029035ms] Nov 18 06:41:17.792: INFO: Created: latency-svc-wrxw7 Nov 18 06:41:17.807: INFO: Got endpoints: 
latency-svc-wrxw7 [908.315161ms] Nov 18 06:41:17.824: INFO: Created: latency-svc-lwglb Nov 18 06:41:17.850: INFO: Got endpoints: latency-svc-lwglb [865.628757ms] Nov 18 06:41:17.904: INFO: Created: latency-svc-cp945 Nov 18 06:41:17.910: INFO: Got endpoints: latency-svc-cp945 [857.507336ms] Nov 18 06:41:17.985: INFO: Created: latency-svc-tshlt Nov 18 06:41:18.002: INFO: Got endpoints: latency-svc-tshlt [892.57157ms] Nov 18 06:41:18.048: INFO: Created: latency-svc-8pqcs Nov 18 06:41:18.054: INFO: Got endpoints: latency-svc-8pqcs [904.250798ms] Nov 18 06:41:18.088: INFO: Created: latency-svc-ffct5 Nov 18 06:41:18.113: INFO: Got endpoints: latency-svc-ffct5 [930.189352ms] Nov 18 06:41:18.134: INFO: Created: latency-svc-qchr9 Nov 18 06:41:18.183: INFO: Got endpoints: latency-svc-qchr9 [912.422672ms] Nov 18 06:41:18.237: INFO: Created: latency-svc-zwncr Nov 18 06:41:18.254: INFO: Got endpoints: latency-svc-zwncr [935.247554ms] Nov 18 06:41:18.305: INFO: Created: latency-svc-qbv4m Nov 18 06:41:18.312: INFO: Got endpoints: latency-svc-qbv4m [862.082937ms] Nov 18 06:41:18.358: INFO: Created: latency-svc-jdsbz Nov 18 06:41:18.365: INFO: Got endpoints: latency-svc-jdsbz [901.830313ms] Nov 18 06:41:18.387: INFO: Created: latency-svc-ds5nh Nov 18 06:41:18.398: INFO: Got endpoints: latency-svc-ds5nh [879.76797ms] Nov 18 06:41:18.468: INFO: Created: latency-svc-lvr7z Nov 18 06:41:18.490: INFO: Created: latency-svc-764ht Nov 18 06:41:18.491: INFO: Got endpoints: latency-svc-lvr7z [831.465131ms] Nov 18 06:41:18.504: INFO: Got endpoints: latency-svc-764ht [836.339979ms] Nov 18 06:41:18.526: INFO: Created: latency-svc-zrff6 Nov 18 06:41:18.541: INFO: Got endpoints: latency-svc-zrff6 [837.036709ms] Nov 18 06:41:18.560: INFO: Created: latency-svc-x9gmm Nov 18 06:41:18.599: INFO: Got endpoints: latency-svc-x9gmm [832.177077ms] Nov 18 06:41:18.608: INFO: Created: latency-svc-zr9dh Nov 18 06:41:18.664: INFO: Got endpoints: latency-svc-zr9dh [857.50295ms] Nov 18 06:41:18.688: INFO: Created: latency-svc-bw2vf Nov 18 06:41:18.724: INFO: Got endpoints: latency-svc-bw2vf [874.021079ms] Nov 18 06:41:18.735: INFO: Created: latency-svc-kp7lq Nov 18 06:41:18.752: INFO: Got endpoints: latency-svc-kp7lq [842.058779ms] Nov 18 06:41:18.800: INFO: Created: latency-svc-gjpnf Nov 18 06:41:18.813: INFO: Got endpoints: latency-svc-gjpnf [810.657245ms] Nov 18 06:41:18.869: INFO: Created: latency-svc-wrs95 Nov 18 06:41:18.871: INFO: Got endpoints: latency-svc-wrs95 [816.253677ms] Nov 18 06:41:18.902: INFO: Created: latency-svc-t4qj6 Nov 18 06:41:18.927: INFO: Got endpoints: latency-svc-t4qj6 [814.076924ms] Nov 18 06:41:18.957: INFO: Created: latency-svc-z5d69 Nov 18 06:41:19.019: INFO: Got endpoints: latency-svc-z5d69 [836.211839ms] Nov 18 06:41:19.021: INFO: Created: latency-svc-6xlrh Nov 18 06:41:19.029: INFO: Got endpoints: latency-svc-6xlrh [774.514012ms] Nov 18 06:41:19.046: INFO: Created: latency-svc-rpwkq Nov 18 06:41:19.059: INFO: Got endpoints: latency-svc-rpwkq [747.056577ms] Nov 18 06:41:19.082: INFO: Created: latency-svc-782qw Nov 18 06:41:19.107: INFO: Got endpoints: latency-svc-782qw [741.995514ms] Nov 18 06:41:19.163: INFO: Created: latency-svc-zwplx Nov 18 06:41:19.166: INFO: Got endpoints: latency-svc-zwplx [767.953554ms] Nov 18 06:41:19.199: INFO: Created: latency-svc-bb422 Nov 18 06:41:19.222: INFO: Got endpoints: latency-svc-bb422 [730.989742ms] Nov 18 06:41:19.253: INFO: Created: latency-svc-9h5pf Nov 18 06:41:19.325: INFO: Got endpoints: latency-svc-9h5pf [821.128376ms] Nov 18 06:41:19.377: INFO: Created: 
latency-svc-j6526 Nov 18 06:41:19.468: INFO: Got endpoints: latency-svc-j6526 [926.311753ms] Nov 18 06:41:19.469: INFO: Created: latency-svc-b67l7 Nov 18 06:41:19.493: INFO: Got endpoints: latency-svc-b67l7 [893.803315ms] Nov 18 06:41:19.520: INFO: Created: latency-svc-ppxgt Nov 18 06:41:19.551: INFO: Got endpoints: latency-svc-ppxgt [886.493842ms] Nov 18 06:41:19.619: INFO: Created: latency-svc-68n4t Nov 18 06:41:19.622: INFO: Got endpoints: latency-svc-68n4t [897.701258ms] Nov 18 06:41:19.678: INFO: Created: latency-svc-f5v7l Nov 18 06:41:19.687: INFO: Got endpoints: latency-svc-f5v7l [934.50177ms] Nov 18 06:41:19.707: INFO: Created: latency-svc-wc2b8 Nov 18 06:41:19.739: INFO: Got endpoints: latency-svc-wc2b8 [926.561919ms] Nov 18 06:41:19.747: INFO: Created: latency-svc-fjsg5 Nov 18 06:41:19.761: INFO: Got endpoints: latency-svc-fjsg5 [890.121869ms] Nov 18 06:41:19.783: INFO: Created: latency-svc-hv29c Nov 18 06:41:19.796: INFO: Got endpoints: latency-svc-hv29c [868.458231ms] Nov 18 06:41:19.813: INFO: Created: latency-svc-cvbc5 Nov 18 06:41:19.827: INFO: Got endpoints: latency-svc-cvbc5 [807.078679ms] Nov 18 06:41:19.888: INFO: Created: latency-svc-sssgx Nov 18 06:41:19.891: INFO: Got endpoints: latency-svc-sssgx [861.772767ms] Nov 18 06:41:19.931: INFO: Created: latency-svc-kc9bg Nov 18 06:41:19.941: INFO: Got endpoints: latency-svc-kc9bg [881.46185ms] Nov 18 06:41:19.958: INFO: Created: latency-svc-jzh7h Nov 18 06:41:20.071: INFO: Got endpoints: latency-svc-jzh7h [964.206337ms] Nov 18 06:41:20.075: INFO: Created: latency-svc-2ccjs Nov 18 06:41:20.086: INFO: Got endpoints: latency-svc-2ccjs [919.596338ms] Nov 18 06:41:20.109: INFO: Created: latency-svc-pbj6c Nov 18 06:41:20.123: INFO: Got endpoints: latency-svc-pbj6c [900.424399ms] Nov 18 06:41:20.140: INFO: Created: latency-svc-sqmzk Nov 18 06:41:20.153: INFO: Got endpoints: latency-svc-sqmzk [828.318414ms] Nov 18 06:41:20.217: INFO: Created: latency-svc-hmf5h Nov 18 06:41:20.220: INFO: Got endpoints: latency-svc-hmf5h [752.357935ms] Nov 18 06:41:20.258: INFO: Created: latency-svc-8hn42 Nov 18 06:41:20.296: INFO: Got endpoints: latency-svc-8hn42 [803.342689ms] Nov 18 06:41:21.495: INFO: Created: latency-svc-hprh6 Nov 18 06:41:22.457: INFO: Got endpoints: latency-svc-hprh6 [2.905725472s] Nov 18 06:41:22.473: INFO: Created: latency-svc-lkr2m Nov 18 06:41:22.501: INFO: Got endpoints: latency-svc-lkr2m [2.878253236s] Nov 18 06:41:22.538: INFO: Created: latency-svc-mjlrt Nov 18 06:41:22.606: INFO: Got endpoints: latency-svc-mjlrt [2.918615922s] Nov 18 06:41:22.611: INFO: Created: latency-svc-c562j Nov 18 06:41:22.660: INFO: Got endpoints: latency-svc-c562j [2.920797537s] Nov 18 06:41:22.695: INFO: Created: latency-svc-qbmjh Nov 18 06:41:22.723: INFO: Got endpoints: latency-svc-qbmjh [2.961863425s] Nov 18 06:41:22.736: INFO: Created: latency-svc-ljjns Nov 18 06:41:22.748: INFO: Got endpoints: latency-svc-ljjns [2.951282923s] Nov 18 06:41:22.792: INFO: Created: latency-svc-2rwvq Nov 18 06:41:22.807: INFO: Got endpoints: latency-svc-2rwvq [2.980482698s] Nov 18 06:41:22.856: INFO: Created: latency-svc-9nn7c Nov 18 06:41:22.860: INFO: Got endpoints: latency-svc-9nn7c [2.968672313s] Nov 18 06:41:22.883: INFO: Created: latency-svc-klsnh Nov 18 06:41:22.914: INFO: Got endpoints: latency-svc-klsnh [2.972946666s] Nov 18 06:41:22.939: INFO: Created: latency-svc-lwhpr Nov 18 06:41:22.953: INFO: Got endpoints: latency-svc-lwhpr [2.881080158s] Nov 18 06:41:22.994: INFO: Created: latency-svc-56v4v Nov 18 06:41:23.014: INFO: Got endpoints: 
latency-svc-56v4v [2.928065699s] Nov 18 06:41:23.045: INFO: Created: latency-svc-s62wl Nov 18 06:41:23.062: INFO: Got endpoints: latency-svc-s62wl [2.938660876s] Nov 18 06:41:23.080: INFO: Created: latency-svc-9jff8 Nov 18 06:41:23.121: INFO: Got endpoints: latency-svc-9jff8 [2.967508275s] Nov 18 06:41:23.122: INFO: Created: latency-svc-kdvdz Nov 18 06:41:23.150: INFO: Got endpoints: latency-svc-kdvdz [2.929244583s] Nov 18 06:41:23.181: INFO: Created: latency-svc-2qsh9 Nov 18 06:41:23.194: INFO: Got endpoints: latency-svc-2qsh9 [2.897465549s] Nov 18 06:41:23.217: INFO: Created: latency-svc-lb2rw Nov 18 06:41:23.245: INFO: Got endpoints: latency-svc-lb2rw [787.718499ms] Nov 18 06:41:23.260: INFO: Created: latency-svc-kjvn2 Nov 18 06:41:23.273: INFO: Got endpoints: latency-svc-kjvn2 [772.18719ms] Nov 18 06:41:23.296: INFO: Created: latency-svc-t5sk5 Nov 18 06:41:23.310: INFO: Got endpoints: latency-svc-t5sk5 [704.121893ms] Nov 18 06:41:23.332: INFO: Created: latency-svc-8l65q Nov 18 06:41:23.391: INFO: Got endpoints: latency-svc-8l65q [730.022923ms] Nov 18 06:41:23.392: INFO: Created: latency-svc-gdmqz Nov 18 06:41:23.399: INFO: Got endpoints: latency-svc-gdmqz [675.944218ms] Nov 18 06:41:23.420: INFO: Created: latency-svc-f7lhl Nov 18 06:41:23.431: INFO: Got endpoints: latency-svc-f7lhl [683.012087ms] Nov 18 06:41:23.450: INFO: Created: latency-svc-sskx7 Nov 18 06:41:23.468: INFO: Got endpoints: latency-svc-sskx7 [660.64739ms] Nov 18 06:41:23.488: INFO: Created: latency-svc-kpmkj Nov 18 06:41:23.588: INFO: Got endpoints: latency-svc-kpmkj [727.952523ms] Nov 18 06:41:23.615: INFO: Created: latency-svc-mpssb Nov 18 06:41:23.655: INFO: Got endpoints: latency-svc-mpssb [741.303078ms] Nov 18 06:41:23.684: INFO: Created: latency-svc-6llcq Nov 18 06:41:23.718: INFO: Got endpoints: latency-svc-6llcq [765.514032ms] Nov 18 06:41:23.728: INFO: Created: latency-svc-lb62b Nov 18 06:41:23.745: INFO: Got endpoints: latency-svc-lb62b [730.07718ms] Nov 18 06:41:23.764: INFO: Created: latency-svc-2r6wb Nov 18 06:41:23.775: INFO: Got endpoints: latency-svc-2r6wb [713.186558ms] Nov 18 06:41:23.794: INFO: Created: latency-svc-drrsn Nov 18 06:41:23.817: INFO: Got endpoints: latency-svc-drrsn [695.78395ms] Nov 18 06:41:23.867: INFO: Created: latency-svc-9vhr6 Nov 18 06:41:23.877: INFO: Got endpoints: latency-svc-9vhr6 [727.530902ms] Nov 18 06:41:23.894: INFO: Created: latency-svc-9qgjb Nov 18 06:41:23.907: INFO: Got endpoints: latency-svc-9qgjb [712.640728ms] Nov 18 06:41:23.932: INFO: Created: latency-svc-6gzqf Nov 18 06:41:23.944: INFO: Got endpoints: latency-svc-6gzqf [697.945178ms] Nov 18 06:41:23.962: INFO: Created: latency-svc-66s66 Nov 18 06:41:23.994: INFO: Got endpoints: latency-svc-66s66 [720.293643ms] Nov 18 06:41:24.003: INFO: Created: latency-svc-fh8hc Nov 18 06:41:24.016: INFO: Got endpoints: latency-svc-fh8hc [706.003632ms] Nov 18 06:41:24.034: INFO: Created: latency-svc-g2nrn Nov 18 06:41:24.047: INFO: Got endpoints: latency-svc-g2nrn [655.657638ms] Nov 18 06:41:24.068: INFO: Created: latency-svc-j47fc Nov 18 06:41:24.083: INFO: Got endpoints: latency-svc-j47fc [683.992595ms] Nov 18 06:41:24.084: INFO: Latencies: [79.485137ms 122.793387ms 167.696927ms 175.024414ms 217.568098ms 264.816027ms 366.814872ms 391.780351ms 440.975374ms 601.2652ms 655.657638ms 660.64739ms 675.944218ms 683.012087ms 683.992595ms 686.288176ms 695.78395ms 697.945178ms 704.121893ms 706.003632ms 712.640728ms 713.186558ms 720.293643ms 724.452083ms 727.530902ms 727.952523ms 730.022923ms 730.07718ms 730.989742ms 741.303078ms 
741.995514ms 747.056577ms 752.357935ms 759.412121ms 765.514032ms 767.953554ms 772.18719ms 774.514012ms 787.718499ms 799.628416ms 803.342689ms 807.078679ms 810.657245ms 814.076924ms 816.253677ms 819.282724ms 821.128376ms 828.318414ms 829.724478ms 831.465131ms 832.177077ms 835.321148ms 836.211839ms 836.339979ms 837.036709ms 839.697065ms 842.058779ms 848.775052ms 857.50295ms 857.507336ms 859.002028ms 861.772767ms 862.082937ms 864.77896ms 865.628757ms 867.950274ms 868.458231ms 874.021079ms 876.591945ms 879.76797ms 880.890795ms 881.46185ms 884.782211ms 886.493842ms 887.176701ms 888.873442ms 890.121869ms 892.57157ms 892.699729ms 893.216999ms 893.803315ms 897.701258ms 898.029035ms 898.393222ms 898.77911ms 899.688866ms 900.424399ms 901.512081ms 901.830313ms 903.490752ms 904.250798ms 908.315161ms 909.409619ms 911.715761ms 912.422672ms 914.898321ms 915.232326ms 916.523025ms 917.487733ms 919.523726ms 919.596338ms 920.819128ms 922.453111ms 923.359108ms 926.311753ms 926.561919ms 928.009847ms 930.189352ms 933.204829ms 933.586429ms 934.50177ms 935.247554ms 935.556622ms 935.923713ms 941.464645ms 943.377463ms 944.15599ms 944.960156ms 945.704518ms 949.661111ms 949.913936ms 953.074955ms 953.882615ms 957.849196ms 962.022952ms 963.743273ms 964.09079ms 964.206337ms 967.694285ms 967.92141ms 968.325761ms 968.676776ms 970.565125ms 970.70888ms 970.989609ms 977.415598ms 978.498944ms 979.661391ms 982.144735ms 989.380119ms 999.116518ms 1.001357174s 1.003056512s 1.004092201s 1.005925911s 1.009759291s 1.017096876s 1.028878124s 1.038513772s 1.039050211s 1.058044419s 1.065226731s 1.070386343s 1.097574459s 1.110040583s 1.113530951s 1.119387741s 1.130022951s 1.130306126s 1.144424081s 1.148596645s 1.150113143s 1.158750769s 1.174904067s 1.176227637s 1.180336039s 1.183542989s 1.200958682s 1.212542439s 1.213685928s 1.22259919s 1.229634352s 1.230841606s 1.236324519s 1.236826955s 1.250248425s 1.250603161s 1.257869399s 1.257968077s 1.258707878s 1.260589753s 1.262330505s 1.27196247s 1.275259017s 1.303278772s 2.878253236s 2.881080158s 2.897465549s 2.905725472s 2.918615922s 2.920797537s 2.928065699s 2.929244583s 2.938660876s 2.951282923s 2.961863425s 2.967508275s 2.968672313s 2.972946666s 2.980482698s] Nov 18 06:41:24.086: INFO: 50 %ile: 919.596338ms Nov 18 06:41:24.086: INFO: 90 %ile: 1.260589753s Nov 18 06:41:24.086: INFO: 99 %ile: 2.972946666s Nov 18 06:41:24.086: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:41:24.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4270" for this suite. 
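For context: each Created/Got endpoints pair above is one latency sample, the elapsed time from creating a latency-svc-* Service to observing its Endpoints become reachable, and the 200 samples are then sorted and summarized as the 50/90/99 %ile lines. A minimal Go sketch of that percentile arithmetic (an illustration of the summary step, not the e2e framework's actual code):

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns the p-quantile (0 < p <= 1) of a set of latency
    // samples by indexing into the sorted slice -- the shape of summary
    // the 50/90/99 %ile lines above imply.
    func percentile(samples []time.Duration, p float64) time.Duration {
        sorted := append([]time.Duration(nil), samples...)
        sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
        idx := int(float64(len(sorted))*p) - 1
        if idx < 0 {
            idx = 0
        }
        return sorted[idx]
    }

    func main() {
        // Three samples lifted from the log; the real spec collects 200.
        samples := []time.Duration{
            724452083 * time.Nanosecond,  // latency-svc-nqlmg
            835321148 * time.Nanosecond,  // latency-svc-bmbdc
            2905725472 * time.Nanosecond, // latency-svc-hprh6
        }
        fmt.Println("50 %ile:", percentile(samples, 0.50))
        fmt.Println("99 %ile:", percentile(samples, 0.99))
    }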
• [SLOW TEST:18.768 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":73,"skipped":1327,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:41:24.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 18 06:41:24.238: INFO: Waiting up to 1m0s for all nodes to be ready Nov 18 06:42:24.450: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Nov 18 06:42:25.708: INFO: Created pod: pod0-sched-preemption-low-priority Nov 18 06:42:26.105: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:04.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7532" for this suite. 
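For context: the preemption spec above fills roughly 2/3 of node resources with a low- and a medium-priority pod, then runs a critical pod sized like the low-priority one and expects the scheduler to preempt it. Pod priority comes from PriorityClass objects referenced via priorityClassName; a minimal sketch of such an object (name, value, and description are illustrative, not the test's fixtures):

    package main

    import (
        "fmt"

        schedulingv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A low-priority class like the one behind
        // pod0-sched-preemption-low-priority; values are illustrative.
        low := schedulingv1.PriorityClass{
            ObjectMeta:    metav1.ObjectMeta{Name: "demo-low-priority"},
            Value:         100,
            GlobalDefault: false,
            Description:   "preemptible by higher-priority and critical pods",
        }
        fmt.Printf("PriorityClass %s -> value %d\n", low.Name, low.Value)
    }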
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:100.554 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":74,"skipped":1336,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:04.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 18 06:43:09.339: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:09.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-528" for this suite. 
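For context: the Expected: &{DONE} to match Container's Termination Message: DONE line above works because of the TerminationMessagePolicy -- with FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log copied into the termination message. A minimal sketch of a container spec using that policy (image and command are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // If the container exits nonzero without writing
        // /dev/termination-log, the kubelet falls back to the tail of the
        // container log as the termination message -- hence "DONE" above.
        c := corev1.Container{
            Name:                     "main",
            Image:                    "busybox", // illustrative image
            Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
        fmt.Println(c.TerminationMessagePolicy)
    }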
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1339,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:09.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:43:09.733: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05" in namespace "security-context-test-6101" to be "Succeeded or Failed" Nov 18 06:43:09.815: INFO: Pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05": Phase="Pending", Reason="", readiness=false. Elapsed: 82.220608ms Nov 18 06:43:11.822: INFO: Pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08859462s Nov 18 06:43:15.597: INFO: Pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05": Phase="Running", Reason="", readiness=true. Elapsed: 5.863439326s Nov 18 06:43:17.604: INFO: Pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.870744202s Nov 18 06:43:17.604: INFO: Pod "alpine-nnp-false-e843acab-a52b-48cc-a648-a1ddca9c3a05" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:17.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6101" for this suite. 
• [SLOW TEST:8.255 seconds] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":76,"skipped":1346,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:17.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:43:17.777: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:18.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8823" for this suite. 
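For context: the defaulting spec above creates a CRD whose OpenAPI v3 schema carries defaults, then checks that the apiserver applies them both to incoming requests and to persisted objects that are missing the field when read back. A minimal sketch of a defaulted schema property, assuming the apiextensions-apiserver Go types (the default value is illustrative):

    package main

    import (
        "fmt"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
        // A structural-schema property with a default. The apiserver applies
        // it on writes (defaulting "for requests") and when reading stored
        // objects that predate the field ("from storage").
        prop := apiextensionsv1.JSONSchemaProps{
            Type:    "string",
            Default: &apiextensionsv1.JSON{Raw: []byte(`"red"`)}, // illustrative
        }
        fmt.Println("default:", string(prop.Default.Raw))
    }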
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":77,"skipped":1353,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:18.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Nov 18 06:43:29.163: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:29.204: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:31.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:31.214: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:33.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:33.212: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:35.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:35.214: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:37.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:37.212: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:39.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:39.214: INFO: Pod pod-with-prestop-http-hook still exists Nov 18 06:43:41.205: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Nov 18 06:43:41.213: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:41.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1172" for this suite. 
• [SLOW TEST:22.247 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":78,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:41.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:43:41.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config version' Nov 18 06:43:42.643: INFO: stderr: "" Nov 18 06:43:42.643: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.5-rc.0\", GitCommit:\"9546a0e88d62afd8fdf50c4ed91514d5192db450\", GitTreeState:\"clean\", BuildDate:\"2020-11-11T13:36:54Z\", GoVersion:\"go1.15.2\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:42.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3353" for this suite. 
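For context: this spec simply shells out to kubectl version (the exact command line is logged above) and asserts that both the Client Version and Server Version stanzas appear in stdout. A minimal stand-in for that invocation (the suite itself goes through a framework helper rather than exec'ing directly):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the logged command, minus the --server and
        // --kubeconfig flags the suite passes explicitly.
        out, err := exec.Command("kubectl", "version").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl version failed:", err)
        }
        fmt.Print(string(out))
    }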
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":79,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:42.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Nov 18 06:43:47.403: INFO: Successfully updated pod "labelsupdate63381e83-88d9-4ff3-9b2d-107c640454b9" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:51.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3495" for this suite. 
• [SLOW TEST:8.758 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1428,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:51.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-45747274-f45d-4bc6-92e4-17eabe6db4e7 STEP: Creating a pod to test consume configMaps Nov 18 06:43:51.542: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8" in namespace "configmap-8926" to be "Succeeded or Failed" Nov 18 06:43:51.558: INFO: Pod "pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.850526ms Nov 18 06:43:53.654: INFO: Pod "pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111670392s Nov 18 06:43:55.660: INFO: Pod "pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117750948s STEP: Saw pod success Nov 18 06:43:55.661: INFO: Pod "pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8" satisfied condition "Succeeded or Failed" Nov 18 06:43:55.709: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8 container configmap-volume-test: STEP: delete the pod Nov 18 06:43:55.764: INFO: Waiting for pod pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8 to disappear Nov 18 06:43:55.776: INFO: Pod pod-configmaps-b6ba5d52-cf9d-4c38-9c0a-62c6a29597e8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:55.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8926" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1443,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:55.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:43:56.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6789" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":82,"skipped":1449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:43:56.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-f294289e-ba97-4372-aacc-8e83af87bdea STEP: Creating a pod to test consume configMaps Nov 18 06:43:56.467: INFO: Waiting up to 5m0s for pod "pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9" in namespace "configmap-5951" to be "Succeeded or Failed" Nov 18 06:43:56.500: INFO: Pod "pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9": Phase="Pending", Reason="", readiness=false. Elapsed: 33.332846ms Nov 18 06:43:58.509: INFO: Pod "pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042218663s Nov 18 06:44:00.518: INFO: Pod "pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050882104s STEP: Saw pod success Nov 18 06:44:00.518: INFO: Pod "pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9" satisfied condition "Succeeded or Failed" Nov 18 06:44:00.523: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9 container configmap-volume-test: STEP: delete the pod Nov 18 06:44:00.709: INFO: Waiting for pod pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9 to disappear Nov 18 06:44:00.753: INFO: Pod pod-configmaps-24998f2f-ce2d-4cf5-8c75-3a2e820ae8c9 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:44:00.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5951" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1480,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:44:00.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Nov 18 06:44:00.924: INFO: Waiting up to 5m0s for pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845" in namespace "var-expansion-6278" to be "Succeeded or Failed" Nov 18 06:44:00.961: INFO: Pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845": Phase="Pending", Reason="", readiness=false. Elapsed: 36.01956ms Nov 18 06:44:02.969: INFO: Pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043987707s Nov 18 06:44:04.977: INFO: Pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052386583s Nov 18 06:44:06.984: INFO: Pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.059264008s STEP: Saw pod success Nov 18 06:44:06.984: INFO: Pod "var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845" satisfied condition "Succeeded or Failed" Nov 18 06:44:06.989: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845 container dapi-container: STEP: delete the pod Nov 18 06:44:07.057: INFO: Waiting for pod var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845 to disappear Nov 18 06:44:07.075: INFO: Pod var-expansion-503b6254-b898-4bc8-9a0f-9020086a1845 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:44:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6278" for this suite. • [SLOW TEST:6.325 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:44:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-91041bba-610b-4cde-9390-e47d0e23069b STEP: Creating a pod to test consume configMaps Nov 18 06:44:07.229: INFO: Waiting up to 5m0s for pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a" in namespace "configmap-3182" to be "Succeeded or Failed" Nov 18 06:44:07.267: INFO: Pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.883614ms Nov 18 06:44:09.325: INFO: Pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095704728s Nov 18 06:44:11.332: INFO: Pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a": Phase="Running", Reason="", readiness=true. Elapsed: 4.102762255s Nov 18 06:44:13.339: INFO: Pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.110094966s STEP: Saw pod success Nov 18 06:44:13.340: INFO: Pod "pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a" satisfied condition "Succeeded or Failed" Nov 18 06:44:13.345: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a container configmap-volume-test: STEP: delete the pod Nov 18 06:44:13.411: INFO: Waiting for pod pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a to disappear Nov 18 06:44:13.423: INFO: Pod pod-configmaps-e202612d-5bfd-464f-ab4d-8cb337cc030a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:44:13.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3182" for this suite. • [SLOW TEST:6.342 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":85,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:44:13.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:44:15.823: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:44:17.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:44:19.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278655, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:44:22.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Nov 18 06:44:22.914: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:44:22.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9885" for this suite. STEP: Destroying namespace "webhook-9885-markers" for this suite. 
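For context: this spec deploys sample-webhook-deployment, waits for it to become available (the two DeploymentStatus dumps above are the not-yet-ready polls), registers a validating webhook for CRD objects, and then expects the CRD create at 06:44:22.914 to be denied. A minimal sketch of the webhook registration, assuming the admissionregistration/v1 Go types (the name is illustrative, and the ClientConfig pointing at the e2e-test-webhook Service is omitted):

    package main

    import (
        "fmt"

        admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    )

    func main() {
        fail := admissionregistrationv1.Fail
        none := admissionregistrationv1.SideEffectClassNone
        // A validating webhook scoped to CRD creation; ClientConfig (the
        // reference to the webhook Service) is omitted in this sketch.
        hook := admissionregistrationv1.ValidatingWebhook{
            Name:                    "deny-crd.example.com", // illustrative
            FailurePolicy:           &fail,
            SideEffects:             &none,
            AdmissionReviewVersions: []string{"v1"},
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{
                    admissionregistrationv1.Create,
                },
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{"apiextensions.k8s.io"},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"customresourcedefinitions"},
                },
            }},
        }
        fmt.Println("webhook:", hook.Name)
    }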
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.636 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":86,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:44:23.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Nov 18 06:44:27.251: INFO: &Pod{ObjectMeta:{send-events-fb658cc8-b5cf-4036-b5e3-055b1225ac4c events-2907 /api/v1/namespaces/events-2907/pods/send-events-fb658cc8-b5cf-4036-b5e3-055b1225ac4c b52f916a-dd83-40bb-aace-fe049403a33e 11990414 0 2020-11-18 06:44:23 +0000 UTC map[name:foo time:190872142] map[] [] [] [{e2e.test Update v1 2020-11-18 06:44:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 06:44:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qrflm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qrflm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qrflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:44:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:44:26 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:44:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 06:44:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.144,StartTime:2020-11-18 06:44:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 06:44:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://cca3eab3cade8dcffca096d1ef68c96df9f0d78184bf330dafbd4a1830001cb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Nov 18 06:44:29.265: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Nov 18 06:44:31.275: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:44:31.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2907" for this suite. • [SLOW TEST:8.255 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":87,"skipped":1554,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:44:31.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 
'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:45:04.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7364" for this suite. • [SLOW TEST:33.215 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1557,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:45:04.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:45:08.737: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "containers-7705" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:45:08.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Nov 18 06:45:08.873: INFO: >>> kubeConfig: /root/.kube/config Nov 18 06:45:29.742: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:46:54.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3800" for this suite. • [SLOW TEST:105.711 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":90,"skipped":1579,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:46:54.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:47:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9710" for this suite. • [SLOW TEST:11.165 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":91,"skipped":1591,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:47:05.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-bfd9c6c9-9bbf-4b29-ad2c-7b7be76c1d28 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:47:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3127" for this suite. 
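The failure asserted by this spec is apiserver-side validation: keys in a ConfigMap's data map must be valid key names, and the empty string is rejected at create time, so the test passes without ever scheduling a pod. A minimal reproduction, assuming an illustrative object name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-empty-key   # illustrative name
data:
  "": "value"                 # empty key: the apiserver rejects the create with an Invalid error
EOF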
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":92,"skipped":1596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:47:05.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 06:47:05.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58" in namespace "downward-api-893" to be "Succeeded or Failed" Nov 18 06:47:05.973: INFO: Pod "downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.840109ms Nov 18 06:47:07.982: INFO: Pod "downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017838801s Nov 18 06:47:09.994: INFO: Pod "downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02933292s STEP: Saw pod success Nov 18 06:47:09.994: INFO: Pod "downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58" satisfied condition "Succeeded or Failed" Nov 18 06:47:09.999: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58 container client-container: STEP: delete the pod Nov 18 06:47:10.066: INFO: Waiting for pod downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58 to disappear Nov 18 06:47:10.071: INFO: Pod downwardapi-volume-78e467b7-64f2-46e8-b50e-5388f8e9ab58 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:47:10.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-893" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:47:10.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d7949d4c-2ab6-4bb3-bc13-655902a8dbb8 STEP: Creating a pod to test consume secrets Nov 18 06:47:10.336: INFO: Waiting up to 5m0s for pod "pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9" in namespace "secrets-2021" to be "Succeeded or Failed" Nov 18 06:47:10.371: INFO: Pod "pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.533221ms Nov 18 06:47:12.537: INFO: Pod "pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201079695s Nov 18 06:47:14.544: INFO: Pod "pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.207942547s STEP: Saw pod success Nov 18 06:47:14.545: INFO: Pod "pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9" satisfied condition "Succeeded or Failed" Nov 18 06:47:14.550: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9 container secret-volume-test: STEP: delete the pod Nov 18 06:47:14.620: INFO: Waiting for pod pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9 to disappear Nov 18 06:47:14.659: INFO: Pod pod-secrets-f91f7484-2bf8-4aa3-b2f6-307bb7fca9a9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:47:14.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2021" for this suite. STEP: Destroying namespace "secret-namespace-2052" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1740,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:47:14.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-828 STEP: creating replication controller nodeport-test in namespace services-828 I1118 06:47:14.946701 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-828, replica count: 2 I1118 06:47:17.998068 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:47:20.999051 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 06:47:20.999: INFO: Creating new exec pod Nov 18 06:47:26.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-828 execpodhnfsk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Nov 18 06:47:34.852: INFO: stderr: "I1118 06:47:34.734461 727 log.go:181] (0x40001bc370) (0x400100a000) Create stream\nI1118 06:47:34.739612 727 log.go:181] (0x40001bc370) (0x400100a000) Stream added, broadcasting: 1\nI1118 06:47:34.751149 727 log.go:181] (0x40001bc370) Reply frame received for 1\nI1118 06:47:34.751988 727 log.go:181] (0x40001bc370) (0x400100a0a0) Create stream\nI1118 06:47:34.752070 727 log.go:181] (0x40001bc370) (0x400100a0a0) Stream added, broadcasting: 3\nI1118 06:47:34.753839 727 log.go:181] (0x40001bc370) Reply frame received for 3\nI1118 06:47:34.754157 727 log.go:181] (0x40001bc370) (0x400024a000) Create stream\nI1118 06:47:34.754221 727 log.go:181] (0x40001bc370) (0x400024a000) Stream added, broadcasting: 5\nI1118 06:47:34.755383 727 log.go:181] (0x40001bc370) Reply frame received for 5\nI1118 06:47:34.832088 727 log.go:181] (0x40001bc370) Data frame received for 5\nI1118 06:47:34.832347 727 log.go:181] (0x400024a000) (5) Data frame handling\nI1118 06:47:34.832637 727 log.go:181] (0x40001bc370) Data frame received for 3\nI1118 06:47:34.832736 727 log.go:181] (0x400100a0a0) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nI1118 06:47:34.833756 727 log.go:181] (0x400024a000) (5) Data frame sent\nI1118 06:47:34.833990 727 log.go:181] 
(0x40001bc370) Data frame received for 1\nI1118 06:47:34.834074 727 log.go:181] (0x400100a000) (1) Data frame handling\nI1118 06:47:34.834144 727 log.go:181] (0x400100a000) (1) Data frame sent\nI1118 06:47:34.834249 727 log.go:181] (0x40001bc370) Data frame received for 5\nI1118 06:47:34.834321 727 log.go:181] (0x400024a000) (5) Data frame handling\nI1118 06:47:34.834411 727 log.go:181] (0x400024a000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1118 06:47:34.834481 727 log.go:181] (0x40001bc370) Data frame received for 5\nI1118 06:47:34.834546 727 log.go:181] (0x400024a000) (5) Data frame handling\nI1118 06:47:34.836210 727 log.go:181] (0x40001bc370) (0x400100a000) Stream removed, broadcasting: 1\nI1118 06:47:34.839524 727 log.go:181] (0x40001bc370) Go away received\nI1118 06:47:34.841701 727 log.go:181] (0x40001bc370) (0x400100a000) Stream removed, broadcasting: 1\nI1118 06:47:34.842448 727 log.go:181] (0x40001bc370) (0x400100a0a0) Stream removed, broadcasting: 3\nI1118 06:47:34.842961 727 log.go:181] (0x40001bc370) (0x400024a000) Stream removed, broadcasting: 5\n" Nov 18 06:47:34.853: INFO: stdout: "" Nov 18 06:47:34.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-828 execpodhnfsk -- /bin/sh -x -c nc -zv -t -w 2 10.104.93.63 80' Nov 18 06:47:36.467: INFO: stderr: "I1118 06:47:36.317003 747 log.go:181] (0x4000244000) (0x40006e4320) Create stream\nI1118 06:47:36.322466 747 log.go:181] (0x4000244000) (0x40006e4320) Stream added, broadcasting: 1\nI1118 06:47:36.336304 747 log.go:181] (0x4000244000) Reply frame received for 1\nI1118 06:47:36.336910 747 log.go:181] (0x4000244000) (0x400023c0a0) Create stream\nI1118 06:47:36.336969 747 log.go:181] (0x4000244000) (0x400023c0a0) Stream added, broadcasting: 3\nI1118 06:47:36.338530 747 log.go:181] (0x4000244000) Reply frame received for 3\nI1118 06:47:36.338868 747 log.go:181] (0x4000244000) (0x4000c8a000) Create stream\nI1118 06:47:36.338946 747 log.go:181] (0x4000244000) (0x4000c8a000) Stream added, broadcasting: 5\nI1118 06:47:36.340132 747 log.go:181] (0x4000244000) Reply frame received for 5\nI1118 06:47:36.433190 747 log.go:181] (0x4000244000) Data frame received for 3\nI1118 06:47:36.433655 747 log.go:181] (0x4000244000) Data frame received for 5\nI1118 06:47:36.433858 747 log.go:181] (0x4000c8a000) (5) Data frame handling\nI1118 06:47:36.434063 747 log.go:181] (0x4000244000) Data frame received for 1\nI1118 06:47:36.434162 747 log.go:181] (0x40006e4320) (1) Data frame handling\nI1118 06:47:36.434655 747 log.go:181] (0x400023c0a0) (3) Data frame handling\nI1118 06:47:36.434967 747 log.go:181] (0x4000c8a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.93.63 80\nConnection to 10.104.93.63 80 port [tcp/http] succeeded!\nI1118 06:47:36.435330 747 log.go:181] (0x40006e4320) (1) Data frame sent\nI1118 06:47:36.435427 747 log.go:181] (0x4000244000) Data frame received for 5\nI1118 06:47:36.435511 747 log.go:181] (0x4000c8a000) (5) Data frame handling\nI1118 06:47:36.437497 747 log.go:181] (0x4000244000) (0x40006e4320) Stream removed, broadcasting: 1\nI1118 06:47:36.440559 747 log.go:181] (0x4000244000) Go away received\nI1118 06:47:36.458115 747 log.go:181] (0x4000244000) (0x40006e4320) Stream removed, broadcasting: 1\nI1118 06:47:36.458352 747 log.go:181] (0x4000244000) (0x400023c0a0) Stream removed, broadcasting: 3\nI1118 06:47:36.458500 747 log.go:181] (0x4000244000) (0x4000c8a000) Stream removed, broadcasting: 5\n" Nov 
18 06:47:36.468: INFO: stdout: "" Nov 18 06:47:36.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-828 execpodhnfsk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 32570' Nov 18 06:47:38.103: INFO: stderr: "I1118 06:47:37.956703 767 log.go:181] (0x4000879130) (0x4000d985a0) Create stream\nI1118 06:47:37.960542 767 log.go:181] (0x4000879130) (0x4000d985a0) Stream added, broadcasting: 1\nI1118 06:47:37.982653 767 log.go:181] (0x4000879130) Reply frame received for 1\nI1118 06:47:37.983589 767 log.go:181] (0x4000879130) (0x4000ea2000) Create stream\nI1118 06:47:37.983716 767 log.go:181] (0x4000879130) (0x4000ea2000) Stream added, broadcasting: 3\nI1118 06:47:37.985697 767 log.go:181] (0x4000879130) Reply frame received for 3\nI1118 06:47:37.986172 767 log.go:181] (0x4000879130) (0x4000d00000) Create stream\nI1118 06:47:37.986282 767 log.go:181] (0x4000879130) (0x4000d00000) Stream added, broadcasting: 5\nI1118 06:47:37.987747 767 log.go:181] (0x4000879130) Reply frame received for 5\nI1118 06:47:38.079260 767 log.go:181] (0x4000879130) Data frame received for 3\nI1118 06:47:38.079562 767 log.go:181] (0x4000879130) Data frame received for 1\nI1118 06:47:38.080053 767 log.go:181] (0x4000ea2000) (3) Data frame handling\nI1118 06:47:38.080637 767 log.go:181] (0x4000d985a0) (1) Data frame handling\nI1118 06:47:38.080741 767 log.go:181] (0x4000879130) Data frame received for 5\nI1118 06:47:38.080958 767 log.go:181] (0x4000d00000) (5) Data frame handling\nI1118 06:47:38.082610 767 log.go:181] (0x4000d985a0) (1) Data frame sent\nI1118 06:47:38.082827 767 log.go:181] (0x4000d00000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.18 32570\nConnection to 172.18.0.18 32570 port [tcp/32570] succeeded!\nI1118 06:47:38.083451 767 log.go:181] (0x4000879130) Data frame received for 5\nI1118 06:47:38.083542 767 log.go:181] (0x4000d00000) (5) Data frame handling\nI1118 06:47:38.084006 767 log.go:181] (0x4000879130) (0x4000d985a0) Stream removed, broadcasting: 1\nI1118 06:47:38.086113 767 log.go:181] (0x4000879130) Go away received\nI1118 06:47:38.090519 767 log.go:181] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x4000ea2000), 0x5:(*spdystream.Stream)(0x4000d00000)}\nI1118 06:47:38.091397 767 log.go:181] (0x4000879130) (0x4000d985a0) Stream removed, broadcasting: 1\nI1118 06:47:38.091933 767 log.go:181] (0x4000879130) (0x4000ea2000) Stream removed, broadcasting: 3\nI1118 06:47:38.092260 767 log.go:181] (0x4000879130) (0x4000d00000) Stream removed, broadcasting: 5\n" Nov 18 06:47:38.104: INFO: stdout: "" Nov 18 06:47:38.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-828 execpodhnfsk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 32570' Nov 18 06:47:39.719: INFO: stderr: "I1118 06:47:39.583823 787 log.go:181] (0x40003c5340) (0x40003b26e0) Create stream\nI1118 06:47:39.588572 787 log.go:181] (0x40003c5340) (0x40003b26e0) Stream added, broadcasting: 1\nI1118 06:47:39.606111 787 log.go:181] (0x40003c5340) Reply frame received for 1\nI1118 06:47:39.606644 787 log.go:181] (0x40003c5340) (0x40006ac000) Create stream\nI1118 06:47:39.606699 787 log.go:181] (0x40003c5340) (0x40006ac000) Stream added, broadcasting: 3\nI1118 06:47:39.607864 787 log.go:181] (0x40003c5340) Reply frame received for 3\nI1118 06:47:39.608125 787 log.go:181] (0x40003c5340) (0x400099a000) Create stream\nI1118 06:47:39.608186 787 log.go:181] 
(0x40003c5340) (0x400099a000) Stream added, broadcasting: 5\nI1118 06:47:39.609581 787 log.go:181] (0x40003c5340) Reply frame received for 5\nI1118 06:47:39.695630 787 log.go:181] (0x40003c5340) Data frame received for 5\nI1118 06:47:39.696284 787 log.go:181] (0x40003c5340) Data frame received for 3\nI1118 06:47:39.696699 787 log.go:181] (0x40006ac000) (3) Data frame handling\nI1118 06:47:39.696984 787 log.go:181] (0x400099a000) (5) Data frame handling\nI1118 06:47:39.697284 787 log.go:181] (0x40003c5340) Data frame received for 1\nI1118 06:47:39.697479 787 log.go:181] (0x40003b26e0) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 32570\nConnection to 172.18.0.17 32570 port [tcp/32570] succeeded!\nI1118 06:47:39.699666 787 log.go:181] (0x40003b26e0) (1) Data frame sent\nI1118 06:47:39.699884 787 log.go:181] (0x400099a000) (5) Data frame sent\nI1118 06:47:39.700134 787 log.go:181] (0x40003c5340) Data frame received for 5\nI1118 06:47:39.700238 787 log.go:181] (0x400099a000) (5) Data frame handling\nI1118 06:47:39.703248 787 log.go:181] (0x40003c5340) (0x40003b26e0) Stream removed, broadcasting: 1\nI1118 06:47:39.704531 787 log.go:181] (0x40003c5340) Go away received\nI1118 06:47:39.708804 787 log.go:181] (0x40003c5340) (0x40003b26e0) Stream removed, broadcasting: 1\nI1118 06:47:39.709301 787 log.go:181] (0x40003c5340) (0x40006ac000) Stream removed, broadcasting: 3\nI1118 06:47:39.709572 787 log.go:181] (0x40003c5340) (0x400099a000) Stream removed, broadcasting: 5\n" Nov 18 06:47:39.720: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:47:39.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-828" for this suite. 
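Stripped of the harness plumbing, the connectivity matrix verified above is: the service DNS name on port 80, the ClusterIP (10.104.93.63) on port 80, and each node IP (172.18.0.17, 172.18.0.18) on the allocated NodePort (32570). A rough hand-run equivalent, assuming the agnhost serve-hostname backend from the log and placeholder IPs:

kubectl run nodeport-backend --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 \
  --port=80 --labels=app=nodeport-test -- serve-hostname
kubectl expose pod nodeport-backend --name=nodeport-test --type=NodePort --port=80
kubectl get service nodeport-test -o jsonpath='{.spec.ports[0].nodePort}'
# Then, from any pod inside the cluster:
nc -zv -t -w 2 nodeport-test 80         # service name
nc -zv -t -w 2 <cluster-ip> 80          # ClusterIP
nc -zv -t -w 2 <node-ip> <node-port>    # every node must answer on the NodePort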
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.991 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":95,"skipped":1743,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:47:39.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating an pod Nov 18 06:47:39.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-6621 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Nov 18 06:47:41.317: INFO: stderr: "" Nov 18 06:47:41.317: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Nov 18 06:47:41.318: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Nov 18 06:47:41.320: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6621" to be "running and ready, or succeeded" Nov 18 06:47:41.389: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 69.644116ms Nov 18 06:47:43.825: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.50475984s Nov 18 06:47:45.830: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.510534624s Nov 18 06:47:45.831: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Nov 18 06:47:45.831: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
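The checks that follow exercise kubectl's log-filtering flags against this logs-generator pod. Minus the --server/--kubeconfig plumbing the harness adds, the commands reduce to:

kubectl -n kubectl-6621 logs logs-generator                        # full stream
kubectl -n kubectl-6621 logs logs-generator --tail=1               # newest line only
kubectl -n kubectl-6621 logs logs-generator --limit-bytes=1        # truncate to one byte
kubectl -n kubectl-6621 logs logs-generator --tail=1 --timestamps  # RFC3339 timestamp prefix
kubectl -n kubectl-6621 logs logs-generator --since=1s             # only the last second of output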
Pods: [logs-generator] STEP: checking for a matching strings Nov 18 06:47:45.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621' Nov 18 06:47:47.260: INFO: stderr: "" Nov 18 06:47:47.260: INFO: stdout: "I1118 06:47:44.278620 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/vvht 214\nI1118 06:47:44.478808 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rk2g 210\nI1118 06:47:44.678785 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/h28b 299\nI1118 06:47:44.878865 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/5r9 206\nI1118 06:47:45.078755 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/4zn 476\nI1118 06:47:45.278831 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/cpp 402\nI1118 06:47:45.478774 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/54m 569\nI1118 06:47:45.678824 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/s9f 270\nI1118 06:47:45.878778 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/654c 209\nI1118 06:47:46.078746 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/54r 219\nI1118 06:47:46.278790 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/l5nn 222\nI1118 06:47:46.478762 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/njww 392\nI1118 06:47:46.678776 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/sbz8 409\nI1118 06:47:46.878771 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/5fn 489\nI1118 06:47:47.078793 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/xm47 389\n" STEP: limiting log lines Nov 18 06:47:47.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621 --tail=1' Nov 18 06:47:48.606: INFO: stderr: "" Nov 18 06:47:48.606: INFO: stdout: "I1118 06:47:48.478783 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/sggj 505\n" Nov 18 06:47:48.607: INFO: got output "I1118 06:47:48.478783 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/sggj 505\n" STEP: limiting log bytes Nov 18 06:47:48.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621 --limit-bytes=1' Nov 18 06:47:49.983: INFO: stderr: "" Nov 18 06:47:49.983: INFO: stdout: "I" Nov 18 06:47:49.983: INFO: got output "I" STEP: exposing timestamps Nov 18 06:47:49.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621 --tail=1 --timestamps' Nov 18 06:47:51.435: INFO: stderr: "" Nov 18 06:47:51.435: INFO: stdout: "2020-11-18T06:47:51.278981652Z I1118 06:47:51.278788 1 logs_generator.go:76] 35 GET /api/v1/namespaces/ns/pods/mql2 496\n" Nov 18 06:47:51.436: INFO: got output "2020-11-18T06:47:51.278981652Z I1118 06:47:51.278788 1 logs_generator.go:76] 35 GET /api/v1/namespaces/ns/pods/mql2 496\n" STEP: restricting to a time range Nov 18 06:47:53.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621 --since=1s' Nov 18 06:47:55.365: INFO: stderr: "" Nov 18 06:47:55.365: INFO: stdout: "I1118 06:47:54.478764 1 logs_generator.go:76] 51 GET 
/api/v1/namespaces/kube-system/pods/d4pr 518\nI1118 06:47:54.678798 1 logs_generator.go:76] 52 PUT /api/v1/namespaces/kube-system/pods/7rkq 312\nI1118 06:47:54.878703 1 logs_generator.go:76] 53 GET /api/v1/namespaces/default/pods/kmt 239\nI1118 06:47:55.078755 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/kube-system/pods/4mvk 590\nI1118 06:47:55.278782 1 logs_generator.go:76] 55 POST /api/v1/namespaces/kube-system/pods/pbkh 208\n" Nov 18 06:47:55.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6621 --since=24h' Nov 18 06:47:56.813: INFO: stderr: "" Nov 18 06:47:56.813: INFO: stdout: "I1118 06:47:44.278620 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/vvht 214\nI1118 06:47:44.478808 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rk2g 210\nI1118 06:47:44.678785 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/h28b 299\nI1118 06:47:44.878865 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/5r9 206\nI1118 06:47:45.078755 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/4zn 476\nI1118 06:47:45.278831 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/cpp 402\nI1118 06:47:45.478774 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/54m 569\nI1118 06:47:45.678824 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/s9f 270\nI1118 06:47:45.878778 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/654c 209\nI1118 06:47:46.078746 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/54r 219\nI1118 06:47:46.278790 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/l5nn 222\nI1118 06:47:46.478762 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/njww 392\nI1118 06:47:46.678776 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/sbz8 409\nI1118 06:47:46.878771 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/5fn 489\nI1118 06:47:47.078793 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/xm47 389\nI1118 06:47:47.278731 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/lkk 513\nI1118 06:47:47.478795 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/gzpc 252\nI1118 06:47:47.678746 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/kvsp 467\nI1118 06:47:47.878738 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/nbmp 513\nI1118 06:47:48.078751 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/hgrg 233\nI1118 06:47:48.278799 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/d4zn 434\nI1118 06:47:48.478783 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/sggj 505\nI1118 06:47:48.678755 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/skx 540\nI1118 06:47:48.878755 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/7p4c 543\nI1118 06:47:49.078761 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/kptn 337\nI1118 06:47:49.278776 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/kxl8 500\nI1118 06:47:49.478722 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/mjp 416\nI1118 06:47:49.678751 1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/cdtt 253\nI1118 06:47:49.878754 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/sbr 383\nI1118 06:47:50.078739 1 logs_generator.go:76] 29 GET /api/v1/namespaces/default/pods/5wnc 368\nI1118 06:47:50.278723 1 
logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/4xxp 321\nI1118 06:47:50.478774 1 logs_generator.go:76] 31 POST /api/v1/namespaces/kube-system/pods/kt4 229\nI1118 06:47:50.678755 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/d4bw 286\nI1118 06:47:50.878776 1 logs_generator.go:76] 33 POST /api/v1/namespaces/kube-system/pods/b9zc 363\nI1118 06:47:51.078723 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/h87 423\nI1118 06:47:51.278788 1 logs_generator.go:76] 35 GET /api/v1/namespaces/ns/pods/mql2 496\nI1118 06:47:51.478756 1 logs_generator.go:76] 36 GET /api/v1/namespaces/kube-system/pods/tgc 364\nI1118 06:47:51.678792 1 logs_generator.go:76] 37 GET /api/v1/namespaces/default/pods/77gv 310\nI1118 06:47:51.878789 1 logs_generator.go:76] 38 GET /api/v1/namespaces/ns/pods/w4d 327\nI1118 06:47:52.078806 1 logs_generator.go:76] 39 POST /api/v1/namespaces/kube-system/pods/ljtn 393\nI1118 06:47:52.278702 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/xbk 344\nI1118 06:47:52.478800 1 logs_generator.go:76] 41 POST /api/v1/namespaces/kube-system/pods/5fgv 475\nI1118 06:47:52.678805 1 logs_generator.go:76] 42 PUT /api/v1/namespaces/ns/pods/2hpc 325\nI1118 06:47:52.878792 1 logs_generator.go:76] 43 PUT /api/v1/namespaces/kube-system/pods/tmqw 423\nI1118 06:47:53.078832 1 logs_generator.go:76] 44 GET /api/v1/namespaces/ns/pods/zxv 524\nI1118 06:47:53.278761 1 logs_generator.go:76] 45 GET /api/v1/namespaces/default/pods/wfmg 250\nI1118 06:47:53.478802 1 logs_generator.go:76] 46 PUT /api/v1/namespaces/default/pods/ncv6 550\nI1118 06:47:53.678740 1 logs_generator.go:76] 47 GET /api/v1/namespaces/default/pods/lqt4 276\nI1118 06:47:53.878814 1 logs_generator.go:76] 48 GET /api/v1/namespaces/ns/pods/cz9k 288\nI1118 06:47:54.078795 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/default/pods/dl7 510\nI1118 06:47:54.278745 1 logs_generator.go:76] 50 PUT /api/v1/namespaces/ns/pods/fwl 225\nI1118 06:47:54.478764 1 logs_generator.go:76] 51 GET /api/v1/namespaces/kube-system/pods/d4pr 518\nI1118 06:47:54.678798 1 logs_generator.go:76] 52 PUT /api/v1/namespaces/kube-system/pods/7rkq 312\nI1118 06:47:54.878703 1 logs_generator.go:76] 53 GET /api/v1/namespaces/default/pods/kmt 239\nI1118 06:47:55.078755 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/kube-system/pods/4mvk 590\nI1118 06:47:55.278782 1 logs_generator.go:76] 55 POST /api/v1/namespaces/kube-system/pods/pbkh 208\nI1118 06:47:55.478758 1 logs_generator.go:76] 56 POST /api/v1/namespaces/ns/pods/mjx 428\nI1118 06:47:55.678792 1 logs_generator.go:76] 57 GET /api/v1/namespaces/default/pods/4qs 281\nI1118 06:47:55.878756 1 logs_generator.go:76] 58 POST /api/v1/namespaces/default/pods/vh8 393\nI1118 06:47:56.078829 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/default/pods/2qs8 422\nI1118 06:47:56.278788 1 logs_generator.go:76] 60 POST /api/v1/namespaces/ns/pods/whd 516\nI1118 06:47:56.478754 1 logs_generator.go:76] 61 POST /api/v1/namespaces/kube-system/pods/dct4 352\nI1118 06:47:56.678773 1 logs_generator.go:76] 62 GET /api/v1/namespaces/ns/pods/87g 491\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Nov 18 06:47:56.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6621' Nov 18 06:48:10.310: INFO: stderr: "" Nov 18 06:48:10.310: INFO: stdout: "pod 
\"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:10.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6621" for this suite. • [SLOW TEST:30.584 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":96,"skipped":1747,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:10.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Nov 18 06:48:10.447: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991394 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 06:48:10.449: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991395 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 06:48:10.450: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 
/api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991397 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Nov 18 06:48:20.564: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991436 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 06:48:20.566: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991437 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 06:48:20.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-6846 /api/v1/namespaces/watch-6846/configmaps/e2e-watch-test-label-changed 365648b2-34ed-4980-9586-769cd929e22c 11991438 0 2020-11-18 06:48:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-11-18 06:48:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:20.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6846" for this suite. 
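The event sequence captured above is the point of the spec: to a watch scoped by a label selector, relabelling an object out of the selector is reported as DELETED, and relabelling it back is reported as ADDED, even though the ConfigMap itself was never deleted. A rough kubectl analogue (the suite drives the watch API directly; kubectl renders the stream as rows dropping out and reappearing):

kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &
# Relabel so the object stops matching: it drops out of the watch stream (DELETED).
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=hidden --overwrite
# Restore the label: it re-enters the stream as a fresh ADDED event.
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite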
• [SLOW TEST:10.258 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":97,"skipped":1757,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:20.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 18 06:48:20.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1773' Nov 18 06:48:22.010: INFO: stderr: "" Nov 18 06:48:22.010: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Nov 18 06:48:22.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1773' Nov 18 06:48:30.314: INFO: stderr: "" Nov 18 06:48:30.314: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:30.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1773" for this suite. 
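[Note] For reference, the imperative command this Kubectl test drives is visible verbatim above; stripped of the harness's --server and --kubeconfig flags it reduces to the following. With --restart=Never, kubectl run creates a bare Pod rather than a workload controller:

kubectl run e2e-test-httpd-pod --restart=Never \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1773

# Confirm a plain Pod exists and carries the expected restart policy.
kubectl get pod e2e-test-httpd-pod -n kubectl-1773 -o jsonpath='{.spec.restartPolicy}'

# Cleanup, mirroring the test's AfterEach.
kubectl delete pods e2e-test-httpd-pod --namespace=kubectl-1773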
• [SLOW TEST:9.745 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":98,"skipped":1776,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:30.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:48:30.445: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7" in namespace "security-context-test-5493" to be "Succeeded or Failed" Nov 18 06:48:30.463: INFO: Pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.778722ms Nov 18 06:48:32.470: INFO: Pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025400467s Nov 18 06:48:34.477: INFO: Pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.031858043s Nov 18 06:48:36.505: INFO: Pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06063003s Nov 18 06:48:36.506: INFO: Pod "busybox-readonly-false-fd26b230-593c-4845-9134-81956d0408d7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5493" for this suite. 
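[Note] A minimal sketch of the pod shape this Security Context test exercises (names and image are illustrative, not taken from the run). With readOnlyRootFilesystem: false the container may write to its own root filesystem, so the pod below should reach Succeeded:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo ok > /probe && cat /probe"]
    securityContext:
      readOnlyRootFilesystem: false
EOF
kubectl get pod busybox-readonly-false-demo -o jsonpath='{.status.phase}'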
• [SLOW TEST:6.216 seconds] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:36.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 18 06:48:40.729: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:40.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6112" for this suite. 
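[Note] The Container Runtime test above checks a termination message written as a non-root user to a non-default path (the log shows the expected message DONE). A sketch of that pod shape, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    securityContext:
      runAsUser: 1000                      # non-root, per the test name
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF

# Once the container terminates, the message surfaces in its status.
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'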
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1814,"failed":0} ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:40.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 18 06:48:40.987: INFO: Waiting up to 5m0s for pod "downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51" in namespace "downward-api-9885" to be "Succeeded or Failed" Nov 18 06:48:41.003: INFO: Pod "downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51": Phase="Pending", Reason="", readiness=false. Elapsed: 16.210643ms Nov 18 06:48:43.010: INFO: Pod "downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022779197s Nov 18 06:48:45.017: INFO: Pod "downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030715299s STEP: Saw pod success Nov 18 06:48:45.018: INFO: Pod "downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51" satisfied condition "Succeeded or Failed" Nov 18 06:48:45.023: INFO: Trying to get logs from node leguer-worker2 pod downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51 container dapi-container: STEP: delete the pod Nov 18 06:48:45.115: INFO: Waiting for pod downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51 to disappear Nov 18 06:48:45.217: INFO: Pod downward-api-0daf8d9d-0240-4287-8f97-af44f37a8f51 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:45.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9885" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1814,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:45.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Nov 18 06:48:45.313: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Nov 18 06:48:47.890: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Nov 18 06:48:50.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:48:52.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741278927, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:48:55.049: INFO: Waited 
627.254259ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:48:55.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6997" for this suite. • [SLOW TEST:10.449 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":102,"skipped":1820,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:48:55.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 18 06:49:04.304: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 18 06:49:04.345: INFO: Pod pod-with-poststart-http-hook still exists Nov 18 06:49:06.345: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 18 06:49:06.354: INFO: Pod pod-with-poststart-http-hook still exists Nov 18 06:49:08.345: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 18 06:49:08.354: INFO: Pod pod-with-poststart-http-hook still exists Nov 18 06:49:10.346: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Nov 18 06:49:10.354: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:49:10.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8013" for this suite. • [SLOW TEST:14.693 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1821,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:49:10.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3181 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 18 06:49:10.499: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 18 06:49:10.623: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) Nov 18 06:49:12.661: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 18 06:49:14.646: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:16.636: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:18.632: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:20.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:22.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:24.631: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:26.632: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:28.632: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 06:49:30.631: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 18 06:49:30.641: INFO: The status of Pod netserver-1 is Running (Ready = false) Nov 18 06:49:32.647: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 18 06:49:36.732: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.76:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3181 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 06:49:36.732: INFO: >>> kubeConfig: /root/.kube/config I1118 06:49:36.799374 10 log.go:181] (0x40001b0840) (0x400408a640) Create stream I1118 06:49:36.799583 10 log.go:181] (0x40001b0840) (0x400408a640) Stream added, broadcasting: 1 I1118 06:49:36.803608 10 log.go:181] (0x40001b0840) Reply frame received for 1 I1118 06:49:36.803838 10 log.go:181] (0x40001b0840) (0x4001082dc0) Create stream I1118 06:49:36.803918 10 log.go:181] (0x40001b0840) (0x4001082dc0) Stream added, broadcasting: 3 I1118 06:49:36.805773 10 log.go:181] (0x40001b0840) Reply frame received for 3 I1118 06:49:36.805968 10 log.go:181] (0x40001b0840) (0x4001082e60) Create stream I1118 06:49:36.806095 10 log.go:181] (0x40001b0840) (0x4001082e60) Stream added, broadcasting: 5 I1118 06:49:36.807594 10 log.go:181] (0x40001b0840) Reply frame received for 5 I1118 06:49:36.889548 10 log.go:181] (0x40001b0840) Data frame received for 3 I1118 06:49:36.889845 10 log.go:181] (0x4001082dc0) (3) Data frame handling I1118 06:49:36.890132 10 log.go:181] (0x40001b0840) Data frame received for 5 I1118 06:49:36.890345 10 log.go:181] (0x4001082e60) (5) Data frame handling I1118 06:49:36.890605 10 log.go:181] (0x4001082dc0) (3) Data frame sent I1118 06:49:36.890830 10 log.go:181] (0x40001b0840) Data frame received for 3 I1118 06:49:36.890987 10 log.go:181] (0x4001082dc0) (3) Data frame handling I1118 06:49:36.891284 10 log.go:181] (0x40001b0840) Data frame received for 1 I1118 06:49:36.891415 10 log.go:181] (0x400408a640) (1) Data frame handling I1118 06:49:36.891551 10 log.go:181] (0x400408a640) (1) Data frame sent I1118 06:49:36.891687 10 log.go:181] (0x40001b0840) (0x400408a640) Stream removed, broadcasting: 1 I1118 06:49:36.891859 10 log.go:181] (0x40001b0840) Go away received I1118 06:49:36.892179 10 log.go:181] (0x40001b0840) (0x400408a640) Stream removed, broadcasting: 1 I1118 06:49:36.892284 10 log.go:181] (0x40001b0840) (0x4001082dc0) Stream removed, broadcasting: 3 I1118 06:49:36.892393 10 log.go:181] (0x40001b0840) (0x4001082e60) Stream removed, broadcasting: 5 Nov 18 06:49:36.892: INFO: Found all expected endpoints: [netserver-0] Nov 18 
06:49:36.898: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.153:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3181 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 06:49:36.899: INFO: >>> kubeConfig: /root/.kube/config I1118 06:49:36.967931 10 log.go:181] (0x4005e5ed10) (0x4004378c80) Create stream I1118 06:49:36.968148 10 log.go:181] (0x4005e5ed10) (0x4004378c80) Stream added, broadcasting: 1 I1118 06:49:36.975428 10 log.go:181] (0x4005e5ed10) Reply frame received for 1 I1118 06:49:36.975610 10 log.go:181] (0x4005e5ed10) (0x4004378d20) Create stream I1118 06:49:36.975707 10 log.go:181] (0x4005e5ed10) (0x4004378d20) Stream added, broadcasting: 3 I1118 06:49:36.977402 10 log.go:181] (0x4005e5ed10) Reply frame received for 3 I1118 06:49:36.977589 10 log.go:181] (0x4005e5ed10) (0x4001bbe6e0) Create stream I1118 06:49:36.977687 10 log.go:181] (0x4005e5ed10) (0x4001bbe6e0) Stream added, broadcasting: 5 I1118 06:49:36.979025 10 log.go:181] (0x4005e5ed10) Reply frame received for 5 I1118 06:49:37.066415 10 log.go:181] (0x4005e5ed10) Data frame received for 3 I1118 06:49:37.066685 10 log.go:181] (0x4004378d20) (3) Data frame handling I1118 06:49:37.066891 10 log.go:181] (0x4004378d20) (3) Data frame sent I1118 06:49:37.067090 10 log.go:181] (0x4005e5ed10) Data frame received for 3 I1118 06:49:37.067302 10 log.go:181] (0x4004378d20) (3) Data frame handling I1118 06:49:37.067457 10 log.go:181] (0x4005e5ed10) Data frame received for 5 I1118 06:49:37.067589 10 log.go:181] (0x4001bbe6e0) (5) Data frame handling I1118 06:49:37.068279 10 log.go:181] (0x4005e5ed10) Data frame received for 1 I1118 06:49:37.068473 10 log.go:181] (0x4004378c80) (1) Data frame handling I1118 06:49:37.068668 10 log.go:181] (0x4004378c80) (1) Data frame sent I1118 06:49:37.069023 10 log.go:181] (0x4005e5ed10) (0x4004378c80) Stream removed, broadcasting: 1 I1118 06:49:37.069260 10 log.go:181] (0x4005e5ed10) Go away received I1118 06:49:37.070549 10 log.go:181] (0x4005e5ed10) (0x4004378c80) Stream removed, broadcasting: 1 I1118 06:49:37.070776 10 log.go:181] (0x4005e5ed10) (0x4004378d20) Stream removed, broadcasting: 3 I1118 06:49:37.071029 10 log.go:181] (0x4005e5ed10) (0x4001bbe6e0) Stream removed, broadcasting: 5 Nov 18 06:49:37.071: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:49:37.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3181" for this suite. 
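[Note] The connectivity probe this Networking test execs inside its host-network helper pod appears verbatim above; it is simply an HTTP GET against the netserver pod's /hostName endpoint (IP and port are the values from this run, reachable only inside that cluster):

curl -g -q -s --max-time 15 --connect-timeout 1 \
  http://10.244.2.76:8080/hostName | grep -v '^\s*$'
# A non-empty reply naming the serving pod counts as a found endpoint.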
• [SLOW TEST:26.707 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":104,"skipped":1847,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:49:37.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 06:49:37.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2" in namespace "downward-api-2580" to be "Succeeded or Failed" Nov 18 06:49:37.213: INFO: Pod "downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.908739ms Nov 18 06:49:39.300: INFO: Pod "downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106426221s Nov 18 06:49:41.307: INFO: Pod "downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.11339473s STEP: Saw pod success Nov 18 06:49:41.307: INFO: Pod "downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2" satisfied condition "Succeeded or Failed" Nov 18 06:49:41.341: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2 container client-container: STEP: delete the pod Nov 18 06:49:41.409: INFO: Waiting for pod downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2 to disappear Nov 18 06:49:41.414: INFO: Pod downwardapi-volume-e24790ef-a032-4201-bd2f-1f94aab7a6e2 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:49:41.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2580" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:49:41.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 18 06:49:41.709: INFO: Waiting up to 5m0s for pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52" in namespace "emptydir-5775" to be "Succeeded or Failed" Nov 18 06:49:41.794: INFO: Pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52": Phase="Pending", Reason="", readiness=false. Elapsed: 85.186497ms Nov 18 06:49:43.821: INFO: Pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112284699s Nov 18 06:49:45.828: INFO: Pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119109427s Nov 18 06:49:47.834: INFO: Pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.12496499s STEP: Saw pod success Nov 18 06:49:47.834: INFO: Pod "pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52" satisfied condition "Succeeded or Failed" Nov 18 06:49:47.838: INFO: Trying to get logs from node leguer-worker pod pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52 container test-container: STEP: delete the pod Nov 18 06:49:47.859: INFO: Waiting for pod pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52 to disappear Nov 18 06:49:47.880: INFO: Pod pod-9e8c4ac4-5611-42ed-b8b7-9493b6821c52 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:49:47.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5775" for this suite. • [SLOW TEST:6.468 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1891,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:49:47.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:49:56.104: INFO: DNS probes using dns-test-d65841d2-dfba-454d-b06f-ef98b2c17bc3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:50:04.639: INFO: File wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:04.644: INFO: File jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:04.644: INFO: Lookups using dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a failed for: [wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local] Nov 18 06:50:09.651: INFO: File wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:09.656: INFO: File jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:09.656: INFO: Lookups using dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a failed for: [wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local] Nov 18 06:50:15.102: INFO: File wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:15.179: INFO: File jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:15.179: INFO: Lookups using dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a failed for: [wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local] Nov 18 06:50:19.650: INFO: File wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 18 06:50:19.655: INFO: File jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local from pod dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Nov 18 06:50:19.655: INFO: Lookups using dns-4860/dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a failed for: [wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local] Nov 18 06:50:24.657: INFO: DNS probes using dns-test-dcc2fe66-ee4f-496a-b12e-8805fffcda5a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4860.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4860.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:50:33.260: INFO: DNS probes using dns-test-d26eef57-44b9-429b-952e-4c48d8a4f078 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:50:34.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4860" for this suite. • [SLOW TEST:47.890 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":107,"skipped":1902,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:50:35.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Nov 18 06:50:36.204: INFO: >>> kubeConfig: /root/.kube/config Nov 18 06:50:57.039: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:11.250: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "crd-publish-openapi-4719" for this suite. • [SLOW TEST:95.490 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":108,"skipped":1906,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:11.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-e5bb176e-0a1f-4578-b8f5-12fadf32ef46 STEP: Creating a pod to test consume secrets Nov 18 06:52:11.366: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d" in namespace "projected-6798" to be "Succeeded or Failed" Nov 18 06:52:11.403: INFO: Pod "pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.907266ms Nov 18 06:52:13.410: INFO: Pod "pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044230987s Nov 18 06:52:15.419: INFO: Pod "pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052886428s STEP: Saw pod success Nov 18 06:52:15.419: INFO: Pod "pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d" satisfied condition "Succeeded or Failed" Nov 18 06:52:15.424: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d container secret-volume-test: STEP: delete the pod Nov 18 06:52:15.498: INFO: Waiting for pod pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d to disappear Nov 18 06:52:15.511: INFO: Pod pod-projected-secrets-056fe233-69b9-49d8-8615-1d141f96ed6d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:15.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6798" for this suite. 
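[Note] "Consumable in multiple volumes" here means the same secret is projected into the pod at more than one mount point. A sketch with illustrative names:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-one/data-1 /etc/projected-two/data-1"]
    volumeMounts:
    - name: secret-one
      mountPath: /etc/projected-one
    - name: secret-two
      mountPath: /etc/projected-two
  volumes:
  - name: secret-one
    projected:
      sources:
      - secret:
          name: projected-secret-demo
  - name: secret-two
    projected:
      sources:
      - secret:
          name: projected-secret-demo
EOF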
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1930,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:15.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:19.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8326" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":110,"skipped":1949,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:19.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 06:52:20.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784" in namespace "projected-8781" to be "Succeeded or Failed" Nov 18 06:52:20.091: INFO: Pod "downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784": Phase="Pending", Reason="", readiness=false. Elapsed: 56.918867ms Nov 18 06:52:22.098: INFO: Pod "downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06413502s Nov 18 06:52:24.106: INFO: Pod "downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072023332s STEP: Saw pod success Nov 18 06:52:24.107: INFO: Pod "downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784" satisfied condition "Succeeded or Failed" Nov 18 06:52:24.112: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784 container client-container: STEP: delete the pod Nov 18 06:52:24.230: INFO: Waiting for pod downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784 to disappear Nov 18 06:52:24.263: INFO: Pod downwardapi-volume-fb189208-9a20-419c-87f5-069a96efb784 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:24.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8781" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1960,"failed":0} SSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:24.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 18 06:52:24.452: INFO: starting watch STEP: patching STEP: updating Nov 18 06:52:24.485: INFO: waiting for watch events with expected annotations Nov 18 06:52:24.487: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:24.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-708" for this suite. 
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":112,"skipped":1966,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:24.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5343.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 06:52:30.752: INFO: DNS probes using dns-5343/dns-test-e4443981-5e9f-41da-81b7-e8965c3bddcc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:30.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5343" for this suite. 
• [SLOW TEST:6.275 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":113,"skipped":1968,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:30.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:52:33.205: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:52:35.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279152, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:52:37.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279153, 
loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279152, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:52:40.273: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 06:52:40.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:52:41.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5210" for this suite. STEP: Destroying namespace "webhook-5210-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.793 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":114,"skipped":1971,"failed":0} SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:52:41.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting 
Job.batch foo in namespace job-2972, will wait for the garbage collector to delete the pods Nov 18 06:52:47.791: INFO: Deleting Job.batch foo took: 9.867692ms Nov 18 06:52:47.891: INFO: Terminating Job.batch foo pods took: 100.656895ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:53:30.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2972" for this suite. • [SLOW TEST:48.692 seconds] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":115,"skipped":1974,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:53:30.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Nov 18 06:53:30.940: INFO: created pod pod-service-account-defaultsa Nov 18 06:53:30.940: INFO: pod pod-service-account-defaultsa service account token volume mount: true Nov 18 06:53:30.978: INFO: created pod pod-service-account-mountsa Nov 18 06:53:30.978: INFO: pod pod-service-account-mountsa service account token volume mount: true Nov 18 06:53:30.995: INFO: created pod pod-service-account-nomountsa Nov 18 06:53:30.995: INFO: pod pod-service-account-nomountsa service account token volume mount: false Nov 18 06:53:31.051: INFO: created pod pod-service-account-defaultsa-mountspec Nov 18 06:53:31.051: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Nov 18 06:53:31.111: INFO: created pod pod-service-account-mountsa-mountspec Nov 18 06:53:31.111: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Nov 18 06:53:31.134: INFO: created pod pod-service-account-nomountsa-mountspec Nov 18 06:53:31.134: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Nov 18 06:53:31.169: INFO: created pod pod-service-account-defaultsa-nomountspec Nov 18 06:53:31.169: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Nov 18 06:53:31.244: INFO: created pod pod-service-account-mountsa-nomountspec Nov 18 06:53:31.244: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Nov 18 06:53:31.262: INFO: created 
pod pod-service-account-nomountsa-nomountspec Nov 18 06:53:31.262: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:53:31.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-868" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":116,"skipped":1975,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:53:31.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 18 06:53:31.649: INFO: Waiting up to 1m0s for all nodes to be ready Nov 18 06:54:31.727: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Nov 18 06:54:31.768: INFO: Created pod: pod0-sched-preemption-low-priority Nov 18 06:54:31.824: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:54:55.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1724" for this suite. 
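Note: the preemption above hinges on PriorityClass objects: two pods at low and medium priority fill 2/3 of the node's resources, then a high-priority pod with the same requests forces one of them out. A minimal sketch of the priority side, with an assumed name and value (the suite creates its own classes):

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority        # assumed name, not the suite's
value: 1000000               # larger values preempt smaller ones
globalDefault: false
description: "Illustrative class for preemption."
EOF
```

A pod opts in via spec.priorityClassName: high-priority; when no node can otherwise fit it, the scheduler evicts lower-priority pods to make room.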
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:84.625 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":117,"skipped":1983,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:54:56.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-649 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-649 STEP: creating replication controller externalsvc in namespace services-649 I1118 06:54:56.444483 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-649, replica count: 2 I1118 06:54:59.495901 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 06:55:02.496817 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Nov 18 06:55:02.656: INFO: Creating new exec pod Nov 18 06:55:06.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-649 execpodq8w48 -- /bin/sh -x -c nslookup clusterip-service.services-649.svc.cluster.local' Nov 18 06:55:08.370: INFO: stderr: "I1118 06:55:08.196765 1008 log.go:181] (0x4000c06000) (0x4000ae0460) Create stream\nI1118 06:55:08.203805 1008 log.go:181] (0x4000c06000) (0x4000ae0460) Stream added, broadcasting: 1\nI1118 06:55:08.221947 1008 log.go:181] (0x4000c06000) Reply frame received for 1\nI1118 06:55:08.222917 1008 log.go:181] (0x4000c06000) (0x4000ae0500) Create stream\nI1118 06:55:08.223006 1008 log.go:181] 
(0x4000c06000) (0x4000ae0500) Stream added, broadcasting: 3\nI1118 06:55:08.225760 1008 log.go:181] (0x4000c06000) Reply frame received for 3\nI1118 06:55:08.226347 1008 log.go:181] (0x4000c06000) (0x4000ae05a0) Create stream\nI1118 06:55:08.226460 1008 log.go:181] (0x4000c06000) (0x4000ae05a0) Stream added, broadcasting: 5\nI1118 06:55:08.227882 1008 log.go:181] (0x4000c06000) Reply frame received for 5\nI1118 06:55:08.320809 1008 log.go:181] (0x4000c06000) Data frame received for 5\nI1118 06:55:08.321332 1008 log.go:181] (0x4000ae05a0) (5) Data frame handling\nI1118 06:55:08.322264 1008 log.go:181] (0x4000ae05a0) (5) Data frame sent\n+ nslookup clusterip-service.services-649.svc.cluster.local\nI1118 06:55:08.348145 1008 log.go:181] (0x4000c06000) Data frame received for 3\nI1118 06:55:08.348285 1008 log.go:181] (0x4000ae0500) (3) Data frame handling\nI1118 06:55:08.348453 1008 log.go:181] (0x4000ae0500) (3) Data frame sent\nI1118 06:55:08.349275 1008 log.go:181] (0x4000c06000) Data frame received for 3\nI1118 06:55:08.349440 1008 log.go:181] (0x4000ae0500) (3) Data frame handling\nI1118 06:55:08.349761 1008 log.go:181] (0x4000c06000) Data frame received for 5\nI1118 06:55:08.349884 1008 log.go:181] (0x4000ae05a0) (5) Data frame handling\nI1118 06:55:08.350017 1008 log.go:181] (0x4000ae0500) (3) Data frame sent\nI1118 06:55:08.350179 1008 log.go:181] (0x4000c06000) Data frame received for 3\nI1118 06:55:08.350292 1008 log.go:181] (0x4000ae0500) (3) Data frame handling\nI1118 06:55:08.351614 1008 log.go:181] (0x4000c06000) Data frame received for 1\nI1118 06:55:08.351719 1008 log.go:181] (0x4000ae0460) (1) Data frame handling\nI1118 06:55:08.351839 1008 log.go:181] (0x4000ae0460) (1) Data frame sent\nI1118 06:55:08.353075 1008 log.go:181] (0x4000c06000) (0x4000ae0460) Stream removed, broadcasting: 1\nI1118 06:55:08.356349 1008 log.go:181] (0x4000c06000) Go away received\nI1118 06:55:08.359223 1008 log.go:181] (0x4000c06000) (0x4000ae0460) Stream removed, broadcasting: 1\nI1118 06:55:08.359858 1008 log.go:181] (0x4000c06000) (0x4000ae0500) Stream removed, broadcasting: 3\nI1118 06:55:08.360113 1008 log.go:181] (0x4000c06000) (0x4000ae05a0) Stream removed, broadcasting: 5\n" Nov 18 06:55:08.370: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-649.svc.cluster.local\tcanonical name = externalsvc.services-649.svc.cluster.local.\nName:\texternalsvc.services-649.svc.cluster.local\nAddress: 10.111.56.112\n\n" STEP: deleting ReplicationController externalsvc in namespace services-649, will wait for the garbage collector to delete the pods Nov 18 06:55:08.436: INFO: Deleting ReplicationController externalsvc took: 9.327921ms Nov 18 06:55:08.837: INFO: Terminating ReplicationController externalsvc pods took: 400.918714ms Nov 18 06:55:20.413: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:55:20.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-649" for this suite. 
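Note: the type flip performed above amounts to a single Service update: set spec.type to ExternalName, point spec.externalName at the target FQDN, and clear the allocated cluster IP (ExternalName services carry no cluster IP). A sketch using this run's names; any ClusterIP service patches the same way:

```bash
# "clusterIP": null removes the allocated IP, which the type change requires.
kubectl patch service clusterip-service -n services-649 -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-649.svc.cluster.local","clusterIP":null}}'
```

After the change, lookups of the service name return a CNAME rather than an A record, exactly as in the nslookup output above.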
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.448 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":118,"skipped":1993,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:55:20.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3684 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3684 STEP: Creating statefulset with conflicting port in namespace statefulset-3684 STEP: Waiting until pod test-pod starts running in namespace statefulset-3684 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3684 Nov 18 06:55:26.966: INFO: Observed stateful pod in namespace: statefulset-3684, name: ss-0, uid: 940bd362-3f54-4997-bbb6-333cb5977ccf, status phase: Pending. Waiting for statefulset controller to delete. Nov 18 06:55:27.400: INFO: Observed stateful pod in namespace: statefulset-3684, name: ss-0, uid: 940bd362-3f54-4997-bbb6-333cb5977ccf, status phase: Failed. Waiting for statefulset controller to delete. Nov 18 06:55:27.408: INFO: Observed stateful pod in namespace: statefulset-3684, name: ss-0, uid: 940bd362-3f54-4997-bbb6-333cb5977ccf, status phase: Failed. Waiting for statefulset controller to delete. 
Nov 18 06:55:27.441: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3684 STEP: Removing pod with conflicting port in namespace statefulset-3684 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3684 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 06:55:34.007: INFO: Deleting all statefulset in ns statefulset-3684 Nov 18 06:55:34.011: INFO: Scaling statefulset ss to 0 Nov 18 06:55:44.081: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 06:55:44.086: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:55:44.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3684" for this suite. • [SLOW TEST:23.634 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":119,"skipped":1993,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:55:44.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-aa5f9b1a-44b3-471a-b132-0ad7cae539ee STEP: Creating a pod to test consume secrets Nov 18 06:55:44.233: INFO: Waiting up to 5m0s for pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd" in namespace "secrets-3071" to be "Succeeded or Failed" Nov 18 06:55:44.238: INFO: Pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.568336ms Nov 18 06:55:46.246: INFO: Pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01265849s Nov 18 06:55:48.341: INFO: Pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd": Phase="Running", Reason="", readiness=true. Elapsed: 4.108028441s Nov 18 06:55:50.349: INFO: Pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115443225s STEP: Saw pod success Nov 18 06:55:50.349: INFO: Pod "pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd" satisfied condition "Succeeded or Failed" Nov 18 06:55:50.361: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd container secret-volume-test: STEP: delete the pod Nov 18 06:55:50.411: INFO: Waiting for pod pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd to disappear Nov 18 06:55:50.480: INFO: Pod pod-secrets-49c7faa2-cec9-4500-b3f4-0389bcd2e0dd no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:55:50.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3071" for this suite. • [SLOW TEST:6.390 seconds] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:55:50.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 06:55:53.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 06:55:55.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 06:55:57.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279353, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 06:56:00.937: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 06:56:01.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2724" for this suite. STEP: Destroying namespace "webhook-2724-markers" for this suite. 
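Note: updating and patching a mutating webhook configuration, as this spec does, are plain writes against the admissionregistration.k8s.io/v1 object; the rules list is what gates which operations get mutated. A sketch with a hypothetical configuration name (the suite generates its own):

```bash
kubectl get mutatingwebhookconfigurations
# Hypothetical name; narrows the first webhook's first rule to UPDATE only,
# the same kind of edit as the "not include the create operation" step above.
kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
```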
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.722 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":121,"skipped":2043,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 06:56:01.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-af3bba69-7319-4251-b3de-bb716325e643 in namespace container-probe-4940 Nov 18 06:56:05.392: INFO: Started pod test-webserver-af3bba69-7319-4251-b3de-bb716325e643 in namespace container-probe-4940 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 06:56:05.397: INFO: Initial restart count of pod test-webserver-af3bba69-7319-4251-b3de-bb716325e643 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:00:06.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4940" for this suite. 
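Note: the four-minute wait above is the point of the spec: an HTTP GET liveness probe against a path that always answers 200 should never trip, so restartCount stays at 0. A hedged sketch of a pod with that shape; the name, image, and probe numbers are stand-ins for the suite's test-webserver:

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # placeholder name
spec:
  containers:
  - name: webserver
    image: nginx                   # stand-in image that serves 200 on /
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3
EOF
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # expect 0
```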
• [SLOW TEST:245.531 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2054,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:00:06.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 18 07:00:07.131: INFO: Waiting up to 5m0s for pod "downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d" in namespace "downward-api-6908" to be "Succeeded or Failed" Nov 18 07:00:07.165: INFO: Pod "downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.398776ms Nov 18 07:00:09.422: INFO: Pod "downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291259438s Nov 18 07:00:11.430: INFO: Pod "downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.29920082s STEP: Saw pod success Nov 18 07:00:11.431: INFO: Pod "downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d" satisfied condition "Succeeded or Failed" Nov 18 07:00:11.435: INFO: Trying to get logs from node leguer-worker2 pod downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d container dapi-container: STEP: delete the pod Nov 18 07:00:11.514: INFO: Waiting for pod downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d to disappear Nov 18 07:00:11.529: INFO: Pod downward-api-63e44d38-d596-4d58-9e8e-3a90c1ef3e6d no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:00:11.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6908" for this suite. 
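Note: the env vars verified here come from resourceFieldRef, which projects a container's own requests and limits into its environment (and falls back to node allocatable when a limit is unset). A minimal sketch; names and quantities are illustrative:

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo          # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory
EOF
```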
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:00:11.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:00:11.885: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:00:16.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3817" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":2096,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:00:16.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-c6887913-34e1-48df-b68e-7bdff5654502 STEP: Creating configMap with name cm-test-opt-upd-a554b4dd-9ea1-4b7c-a2db-ab2bf34c113f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c6887913-34e1-48df-b68e-7bdff5654502 STEP: Updating configmap cm-test-opt-upd-a554b4dd-9ea1-4b7c-a2db-ab2bf34c113f STEP: Creating configMap with name cm-test-opt-create-3a927aad-ebfb-4785-b093-dcb44fd55e9c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:01:28.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5310" for this suite. • [SLOW TEST:72.725 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":2102,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:01:28.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Nov 18 07:01:28.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7833' Nov 18 07:01:34.645: INFO: stderr: "" Nov 18 07:01:34.645: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 18 07:01:34.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:01:36.150: INFO: stderr: "" Nov 18 07:01:36.150: INFO: stdout: "update-demo-nautilus-7jsck update-demo-nautilus-fklmn " Nov 18 07:01:36.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:01:37.554: INFO: stderr: "" Nov 18 07:01:37.554: INFO: stdout: "" Nov 18 07:01:37.554: INFO: update-demo-nautilus-7jsck is created but not running Nov 18 07:01:42.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:01:43.969: INFO: stderr: "" Nov 18 07:01:43.969: INFO: stdout: "update-demo-nautilus-7jsck update-demo-nautilus-fklmn " Nov 18 07:01:43.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:01:45.486: INFO: stderr: "" Nov 18 07:01:45.486: INFO: stdout: "true" Nov 18 07:01:45.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:01:46.891: INFO: stderr: "" Nov 18 07:01:46.891: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:01:46.891: INFO: validating pod update-demo-nautilus-7jsck Nov 18 07:01:46.925: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:01:46.926: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 18 07:01:46.926: INFO: update-demo-nautilus-7jsck is verified up and running Nov 18 07:01:46.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fklmn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:01:48.383: INFO: stderr: "" Nov 18 07:01:48.383: INFO: stdout: "true" Nov 18 07:01:48.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fklmn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:01:49.774: INFO: stderr: "" Nov 18 07:01:49.774: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:01:49.774: INFO: validating pod update-demo-nautilus-fklmn Nov 18 07:01:49.781: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:01:49.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 18 07:01:49.781: INFO: update-demo-nautilus-fklmn is verified up and running STEP: scaling down the replication controller Nov 18 07:01:49.794: INFO: scanned /root for discovery docs: Nov 18 07:01:49.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7833' Nov 18 07:01:51.350: INFO: stderr: "" Nov 18 07:01:51.350: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 18 07:01:51.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:01:52.729: INFO: stderr: "" Nov 18 07:01:52.729: INFO: stdout: "update-demo-nautilus-7jsck update-demo-nautilus-fklmn " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 18 07:01:57.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:01:59.157: INFO: stderr: "" Nov 18 07:01:59.157: INFO: stdout: "update-demo-nautilus-7jsck update-demo-nautilus-fklmn " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 18 07:02:04.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:02:05.658: INFO: stderr: "" Nov 18 07:02:05.658: INFO: stdout: "update-demo-nautilus-7jsck " Nov 18 07:02:05.659: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:07.041: INFO: stderr: "" Nov 18 07:02:07.041: INFO: stdout: "true" Nov 18 07:02:07.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:08.479: INFO: stderr: "" Nov 18 07:02:08.479: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:02:08.479: INFO: validating pod update-demo-nautilus-7jsck Nov 18 07:02:08.485: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:02:08.485: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 18 07:02:08.486: INFO: update-demo-nautilus-7jsck is verified up and running STEP: scaling up the replication controller Nov 18 07:02:08.497: INFO: scanned /root for discovery docs: Nov 18 07:02:08.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7833' Nov 18 07:02:11.327: INFO: stderr: "" Nov 18 07:02:11.328: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 18 07:02:11.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7833' Nov 18 07:02:12.881: INFO: stderr: "" Nov 18 07:02:12.881: INFO: stdout: "update-demo-nautilus-7jsck update-demo-nautilus-td7df " Nov 18 07:02:12.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:14.247: INFO: stderr: "" Nov 18 07:02:14.247: INFO: stdout: "true" Nov 18 07:02:14.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7jsck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:15.629: INFO: stderr: "" Nov 18 07:02:15.629: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:02:15.629: INFO: validating pod update-demo-nautilus-7jsck Nov 18 07:02:15.633: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:02:15.633: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 18 07:02:15.633: INFO: update-demo-nautilus-7jsck is verified up and running Nov 18 07:02:15.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-td7df -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:17.015: INFO: stderr: "" Nov 18 07:02:17.015: INFO: stdout: "true" Nov 18 07:02:17.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-td7df -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7833' Nov 18 07:02:18.556: INFO: stderr: "" Nov 18 07:02:18.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:02:18.556: INFO: validating pod update-demo-nautilus-td7df Nov 18 07:02:18.562: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:02:18.563: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 18 07:02:18.563: INFO: update-demo-nautilus-td7df is verified up and running STEP: using delete to clean up resources Nov 18 07:02:18.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7833' Nov 18 07:02:19.967: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 18 07:02:19.967: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 18 07:02:19.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7833' Nov 18 07:02:21.434: INFO: stderr: "No resources found in kubectl-7833 namespace.\n" Nov 18 07:02:21.435: INFO: stdout: "" Nov 18 07:02:21.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7833 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 18 07:02:22.828: INFO: stderr: "" Nov 18 07:02:22.829: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:02:22.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7833" for this suite. • [SLOW TEST:53.965 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":126,"skipped":2106,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:02:22.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to 
be ready Nov 18 07:02:25.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:02:27.421: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279745, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279745, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279745, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279745, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:02:30.465: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:02:31.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9958" for this suite. STEP: Destroying namespace "webhook-9958-markers" for this suite. 
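------------------------------
The webhook test above drives the admissionregistration.k8s.io API: it lists the validating webhook configurations it created, then deletes them as a collection before re-trying the non-compliant configMap. A minimal client-go sketch of that list-then-delete-collection round trip follows; the label selector value and the panic-style error handling are illustrative assumptions, not what the suite actually uses.

    // Sketch: list and collection-delete ValidatingWebhookConfigurations with
    // client-go, the API surface the "listing validating webhooks" test exercises.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        // List the webhook configurations matching a label.
        // The selector value here is an assumption for illustration.
        sel := metav1.ListOptions{LabelSelector: "e2e-list-test=true"}
        list, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().List(ctx, sel)
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

        // Delete them as a collection, as the test does before re-creating the configMap.
        err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
            DeleteCollection(ctx, metav1.DeleteOptions{}, sel)
        if err != nil {
            panic(err)
        }
    }
------------------------------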
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.299 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":127,"skipped":2111,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:02:31.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Nov 18 07:02:31.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8983' Nov 18 07:02:35.266: INFO: stderr: "" Nov 18 07:02:35.266: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 18 07:02:36.273: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:02:36.273: INFO: Found 0 / 1 Nov 18 07:02:37.274: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:02:37.274: INFO: Found 0 / 1 Nov 18 07:02:38.275: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:02:38.276: INFO: Found 1 / 1 Nov 18 07:02:38.276: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Nov 18 07:02:38.282: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:02:38.282: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 18 07:02:38.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config patch pod agnhost-primary-r2cg7 --namespace=kubectl-8983 -p {"metadata":{"annotations":{"x":"y"}}}' Nov 18 07:02:39.628: INFO: stderr: "" Nov 18 07:02:39.628: INFO: stdout: "pod/agnhost-primary-r2cg7 patched\n" STEP: checking annotations Nov 18 07:02:39.634: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:02:39.635: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
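------------------------------
The `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}` invocation logged above sends a strategic-merge patch to the API server. The same operation expressed through client-go would look roughly like the sketch below, reusing the pod name and namespace shown in the log.

    // Sketch: the annotation patch the test issues via kubectl, expressed as a
    // strategic-merge patch through client-go.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Same patch body kubectl sent in the log above.
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err = cs.CoreV1().Pods("kubectl-8983").Patch(
            context.Background(), "agnhost-primary-r2cg7",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
------------------------------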
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:02:39.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8983" for this suite. • [SLOW TEST:8.474 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":128,"skipped":2113,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:02:39.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:02:41.799: INFO: Checking APIGroup: apiregistration.k8s.io Nov 18 07:02:41.802: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Nov 18 07:02:41.802: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.802: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Nov 18 07:02:41.802: INFO: Checking APIGroup: extensions Nov 18 07:02:41.805: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Nov 18 07:02:41.805: INFO: Versions found [{extensions/v1beta1 v1beta1}] Nov 18 07:02:41.805: INFO: extensions/v1beta1 matches extensions/v1beta1 Nov 18 07:02:41.805: INFO: Checking APIGroup: apps Nov 18 07:02:41.807: INFO: PreferredVersion.GroupVersion: apps/v1 Nov 18 07:02:41.807: INFO: Versions found [{apps/v1 v1}] Nov 18 07:02:41.807: INFO: apps/v1 matches apps/v1 Nov 18 07:02:41.807: INFO: Checking APIGroup: events.k8s.io Nov 18 07:02:41.810: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Nov 18 07:02:41.810: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.810: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Nov 18 07:02:41.810: INFO: Checking APIGroup: authentication.k8s.io Nov 18 07:02:41.812: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Nov 18 
07:02:41.812: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.812: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Nov 18 07:02:41.812: INFO: Checking APIGroup: authorization.k8s.io Nov 18 07:02:41.814: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Nov 18 07:02:41.814: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.814: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Nov 18 07:02:41.814: INFO: Checking APIGroup: autoscaling Nov 18 07:02:41.816: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Nov 18 07:02:41.816: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Nov 18 07:02:41.816: INFO: autoscaling/v1 matches autoscaling/v1 Nov 18 07:02:41.816: INFO: Checking APIGroup: batch Nov 18 07:02:41.818: INFO: PreferredVersion.GroupVersion: batch/v1 Nov 18 07:02:41.818: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Nov 18 07:02:41.818: INFO: batch/v1 matches batch/v1 Nov 18 07:02:41.818: INFO: Checking APIGroup: certificates.k8s.io Nov 18 07:02:41.821: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Nov 18 07:02:41.821: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.821: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Nov 18 07:02:41.821: INFO: Checking APIGroup: networking.k8s.io Nov 18 07:02:41.823: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Nov 18 07:02:41.823: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.823: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Nov 18 07:02:41.823: INFO: Checking APIGroup: policy Nov 18 07:02:41.825: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Nov 18 07:02:41.825: INFO: Versions found [{policy/v1beta1 v1beta1}] Nov 18 07:02:41.825: INFO: policy/v1beta1 matches policy/v1beta1 Nov 18 07:02:41.825: INFO: Checking APIGroup: rbac.authorization.k8s.io Nov 18 07:02:41.827: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Nov 18 07:02:41.827: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.827: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Nov 18 07:02:41.827: INFO: Checking APIGroup: storage.k8s.io Nov 18 07:02:41.828: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Nov 18 07:02:41.828: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.828: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Nov 18 07:02:41.828: INFO: Checking APIGroup: admissionregistration.k8s.io Nov 18 07:02:41.830: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Nov 18 07:02:41.830: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.830: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Nov 18 07:02:41.830: INFO: Checking APIGroup: apiextensions.k8s.io Nov 18 07:02:41.832: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Nov 18 07:02:41.832: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.832: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Nov 18 07:02:41.832: INFO: Checking APIGroup: scheduling.k8s.io Nov 18 07:02:41.834: INFO: 
PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Nov 18 07:02:41.834: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.834: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Nov 18 07:02:41.834: INFO: Checking APIGroup: coordination.k8s.io Nov 18 07:02:41.836: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Nov 18 07:02:41.836: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.836: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Nov 18 07:02:41.836: INFO: Checking APIGroup: node.k8s.io Nov 18 07:02:41.838: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Nov 18 07:02:41.838: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.838: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Nov 18 07:02:41.838: INFO: Checking APIGroup: discovery.k8s.io Nov 18 07:02:41.839: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Nov 18 07:02:41.839: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Nov 18 07:02:41.839: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:02:41.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-608" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":129,"skipped":2118,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:02:41.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:02:41.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93" in namespace "projected-7896" to be "Succeeded or Failed" Nov 18 07:02:42.005: INFO: Pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93": Phase="Pending", Reason="", readiness=false. Elapsed: 24.249053ms Nov 18 07:02:44.017: INFO: Pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03631664s Nov 18 07:02:46.222: INFO: Pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.240862121s Nov 18 07:02:48.231: INFO: Pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249600401s STEP: Saw pod success Nov 18 07:02:48.231: INFO: Pod "downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93" satisfied condition "Succeeded or Failed" Nov 18 07:02:48.237: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93 container client-container: STEP: delete the pod Nov 18 07:02:48.314: INFO: Waiting for pod downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93 to disappear Nov 18 07:02:48.322: INFO: Pod downwardapi-volume-cd3b525c-4ae5-46c0-b94f-052dae134f93 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:02:48.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7896" for this suite. • [SLOW TEST:6.482 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":2137,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:02:48.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8861 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 18 07:02:48.469: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Nov 18 07:02:48.565: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:02:50.735: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:02:52.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:02:54.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:02:56.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:02:58.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:03:00.574: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Nov 18 07:03:02.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:03:04.574: INFO: The status of Pod netserver-0 is Running (Ready = false) Nov 18 07:03:06.574: INFO: The status of Pod netserver-0 is Running (Ready = true) Nov 18 07:03:06.585: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Nov 18 07:03:13.132: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.99:8080/dial?request=hostname&protocol=http&host=10.244.2.98&port=8080&tries=1'] Namespace:pod-network-test-8861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:03:13.133: INFO: >>> kubeConfig: /root/.kube/config I1118 07:03:13.218148 10 log.go:181] (0x40003d2e70) (0x40031812c0) Create stream I1118 07:03:13.218367 10 log.go:181] (0x40003d2e70) (0x40031812c0) Stream added, broadcasting: 1 I1118 07:03:13.223223 10 log.go:181] (0x40003d2e70) Reply frame received for 1 I1118 07:03:13.223444 10 log.go:181] (0x40003d2e70) (0x4003181360) Create stream I1118 07:03:13.223601 10 log.go:181] (0x40003d2e70) (0x4003181360) Stream added, broadcasting: 3 I1118 07:03:13.225430 10 log.go:181] (0x40003d2e70) Reply frame received for 3 I1118 07:03:13.225599 10 log.go:181] (0x40003d2e70) (0x4003181400) Create stream I1118 07:03:13.225713 10 log.go:181] (0x40003d2e70) (0x4003181400) Stream added, broadcasting: 5 I1118 07:03:13.227400 10 log.go:181] (0x40003d2e70) Reply frame received for 5 I1118 07:03:13.325084 10 log.go:181] (0x40003d2e70) Data frame received for 3 I1118 07:03:13.325321 10 log.go:181] (0x4003181360) (3) Data frame handling I1118 07:03:13.325476 10 log.go:181] (0x40003d2e70) Data frame received for 5 I1118 07:03:13.325673 10 log.go:181] (0x4003181400) (5) Data frame handling I1118 07:03:13.325780 10 log.go:181] (0x4003181360) (3) Data frame sent I1118 07:03:13.325911 10 log.go:181] (0x40003d2e70) Data frame received for 3 I1118 07:03:13.326005 10 log.go:181] (0x4003181360) (3) Data frame handling I1118 07:03:13.327881 10 log.go:181] (0x40003d2e70) Data frame received for 1 I1118 07:03:13.327996 10 log.go:181] (0x40031812c0) (1) Data frame handling I1118 07:03:13.328132 10 log.go:181] (0x40031812c0) (1) Data frame sent I1118 07:03:13.328254 10 log.go:181] (0x40003d2e70) (0x40031812c0) Stream removed, broadcasting: 1 I1118 07:03:13.328395 10 log.go:181] (0x40003d2e70) Go away received I1118 07:03:13.328773 10 log.go:181] (0x40003d2e70) (0x40031812c0) Stream removed, broadcasting: 1 I1118 07:03:13.329001 10 log.go:181] (0x40003d2e70) (0x4003181360) Stream removed, broadcasting: 3 I1118 07:03:13.329094 10 log.go:181] (0x40003d2e70) (0x4003181400) Stream removed, broadcasting: 5 Nov 18 07:03:13.329: INFO: Waiting for responses: map[] Nov 18 07:03:13.335: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.99:8080/dial?request=hostname&protocol=http&host=10.244.1.175&port=8080&tries=1'] Namespace:pod-network-test-8861 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:03:13.336: INFO: >>> kubeConfig: /root/.kube/config I1118 07:03:13.403088 10 log.go:181] (0x40001b08f0) (0x400259be00) Create stream I1118 07:03:13.403315 10 log.go:181] (0x40001b08f0) (0x400259be00) Stream added, broadcasting: 1 I1118 07:03:13.407415 10 log.go:181] (0x40001b08f0) Reply frame received for 1 I1118 07:03:13.407609 10 log.go:181] (0x40001b08f0) 
(0x400259bea0) Create stream I1118 07:03:13.407713 10 log.go:181] (0x40001b08f0) (0x400259bea0) Stream added, broadcasting: 3 I1118 07:03:13.409816 10 log.go:181] (0x40001b08f0) Reply frame received for 3 I1118 07:03:13.410115 10 log.go:181] (0x40001b08f0) (0x40025110e0) Create stream I1118 07:03:13.410239 10 log.go:181] (0x40001b08f0) (0x40025110e0) Stream added, broadcasting: 5 I1118 07:03:13.412055 10 log.go:181] (0x40001b08f0) Reply frame received for 5 I1118 07:03:13.487929 10 log.go:181] (0x40001b08f0) Data frame received for 3 I1118 07:03:13.488103 10 log.go:181] (0x400259bea0) (3) Data frame handling I1118 07:03:13.488247 10 log.go:181] (0x400259bea0) (3) Data frame sent I1118 07:03:13.488368 10 log.go:181] (0x40001b08f0) Data frame received for 3 I1118 07:03:13.488465 10 log.go:181] (0x400259bea0) (3) Data frame handling I1118 07:03:13.488595 10 log.go:181] (0x40001b08f0) Data frame received for 5 I1118 07:03:13.488749 10 log.go:181] (0x40025110e0) (5) Data frame handling I1118 07:03:13.490191 10 log.go:181] (0x40001b08f0) Data frame received for 1 I1118 07:03:13.490295 10 log.go:181] (0x400259be00) (1) Data frame handling I1118 07:03:13.490447 10 log.go:181] (0x400259be00) (1) Data frame sent I1118 07:03:13.490578 10 log.go:181] (0x40001b08f0) (0x400259be00) Stream removed, broadcasting: 1 I1118 07:03:13.490720 10 log.go:181] (0x40001b08f0) Go away received I1118 07:03:13.490948 10 log.go:181] (0x40001b08f0) (0x400259be00) Stream removed, broadcasting: 1 I1118 07:03:13.491031 10 log.go:181] (0x40001b08f0) (0x400259bea0) Stream removed, broadcasting: 3 I1118 07:03:13.491103 10 log.go:181] (0x40001b08f0) (0x40025110e0) Stream removed, broadcasting: 5 Nov 18 07:03:13.491: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:03:13.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8861" for this suite. 
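------------------------------
The ExecWithOptions records above show the shape of the connectivity probe: the framework execs curl inside the test-container-pod against agnhost's /dial endpoint, which in turn asks the netserver at host:port for its hostname over the requested protocol. A sketch of that request from Go follows, assuming the pod IPs taken from the log and that the caller runs somewhere those cluster-internal IPs are routable (in practice, from inside the cluster).

    // Sketch: the probe request shape visible in the log above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        prober := "10.244.2.99" // test-container-pod IP from the log
        target := "10.244.2.98" // netserver pod IP from the log
        url := fmt.Sprintf(
            "http://%s:8080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
            prober, target)

        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect a small JSON object listing the hostname response(s);
        // the test passes when its "Waiting for responses" map drains to empty.
        fmt.Printf("%s -> %s\n", url, body)
    }
------------------------------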
• [SLOW TEST:25.171 seconds] [sig-network] Networking /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2146,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:03:13.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:03:13.599: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:03:14.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4269" for this suite. 
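------------------------------
The CustomResourceDefinition test body logs almost nothing, but the round trip it performs is a create followed by a delete of a CRD object. A sketch with the apiextensions clientset is below; the widgets.example.com group and Widget kind are invented stand-ins for the randomized names the suite generates.

    // Sketch: create and delete a v1 CustomResourceDefinition, the round trip
    // the "creating/deleting custom resource definition objects" test performs.
    package main

    import (
        "context"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        preserve := true
        crd := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative name
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget",
                    Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    // v1 CRDs require a structural schema; accept arbitrary fields here.
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type:                   "object",
                            XPreserveUnknownFields: &preserve,
                        },
                    },
                }},
            },
        }

        if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().
            Create(ctx, crd, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        if err := cs.ApiextensionsV1().CustomResourceDefinitions().
            Delete(ctx, crd.Name, metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }
------------------------------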
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":132,"skipped":2165,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:03:14.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:03:14.752: INFO: Pod name rollover-pod: Found 0 pods out of 1 Nov 18 07:03:19.821: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 18 07:03:19.821: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Nov 18 07:03:21.830: INFO: Creating deployment "test-rollover-deployment" Nov 18 07:03:21.844: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Nov 18 07:03:23.856: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Nov 18 07:03:23.868: INFO: Ensure that both replica sets have 1 created replica Nov 18 07:03:23.878: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Nov 18 07:03:23.890: INFO: Updating deployment test-rollover-deployment Nov 18 07:03:23.890: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Nov 18 07:03:26.144: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Nov 18 07:03:26.157: INFO: Make sure deployment "test-rollover-deployment" is complete Nov 18 07:03:26.170: INFO: all replica sets need to contain the pod-template-hash label Nov 18 07:03:26.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279805, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:28.189: INFO: all replica sets need to 
contain the pod-template-hash label Nov 18 07:03:28.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279807, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:30.185: INFO: all replica sets need to contain the pod-template-hash label Nov 18 07:03:30.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279807, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:32.189: INFO: all replica sets need to contain the pod-template-hash label Nov 18 07:03:32.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279807, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:34.571: INFO: all replica sets need to contain the pod-template-hash label Nov 18 07:03:34.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279807, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:36.189: INFO: all replica sets need to contain the pod-template-hash label Nov 18 07:03:36.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279807, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741279801, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:03:38.187: INFO: Nov 18 07:03:38.187: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 18 07:03:38.437: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7464 /apis/apps/v1/namespaces/deployment-7464/deployments/test-rollover-deployment 8e1382ef-d444-4b71-9019-ab2371dee37f 11995838 2 2020-11-18 07:03:21 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-18 07:03:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 07:03:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40025d19a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-18 07:03:21 +0000 UTC,LastTransitionTime:2020-11-18 07:03:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-11-18 07:03:38 +0000 UTC,LastTransitionTime:2020-11-18 07:03:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 18 07:03:38.444: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-7464 /apis/apps/v1/namespaces/deployment-7464/replicasets/test-rollover-deployment-5797c7764 f900efa2-69e9-47ec-ace6-ea1c61514ab1 11995827 2 2020-11-18 07:03:23 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 8e1382ef-d444-4b71-9019-ab2371dee37f 0x4004695790 0x4004695791}] [] [{kube-controller-manager Update apps/v1 2020-11-18 07:03:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e1382ef-d444-4b71-9019-ab2371dee37f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004695808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 18 07:03:38.444: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Nov 18 07:03:38.445: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7464 /apis/apps/v1/namespaces/deployment-7464/replicasets/test-rollover-controller 48455b3e-8771-4d3a-bdda-1e8c7c515a55 11995837 2 2020-11-18 07:03:14 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 8e1382ef-d444-4b71-9019-ab2371dee37f 0x400469567f 0x4004695690}] [] [{e2e.test Update apps/v1 2020-11-18 07:03:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 07:03:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e1382ef-d444-4b71-9019-ab2371dee37f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004695728 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 07:03:38.445: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-7464 /apis/apps/v1/namespaces/deployment-7464/replicasets/test-rollover-deployment-78bc8b888c ccac2389-8fc6-46ba-b176-462370f9f92d 11995778 2 2020-11-18 07:03:21 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 8e1382ef-d444-4b71-9019-ab2371dee37f 0x4004695877 0x4004695878}] [] [{kube-controller-manager Update apps/v1 2020-11-18 07:03:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8e1382ef-d444-4b71-9019-ab2371dee37f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004695908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 07:03:38.451: INFO: Pod "test-rollover-deployment-5797c7764-f8njp" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-f8njp test-rollover-deployment-5797c7764- deployment-7464 /api/v1/namespaces/deployment-7464/pods/test-rollover-deployment-5797c7764-f8njp 6f636647-2586-44aa-994a-ede3e58aa134 11995791 0 2020-11-18 07:03:24 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 f900efa2-69e9-47ec-ace6-ea1c61514ab1 0x4004695e80 0x4004695e81}] [] [{kube-controller-manager Update v1 2020-11-18 07:03:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f900efa2-69e9-47ec-ace6-ea1c61514ab1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 07:03:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-28flq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-28flq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-28flq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:03:24 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:03:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:03:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:03:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.100,StartTime:2020-11-18 07:03:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 07:03:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://110e787c9afd2446bdf16fb6df1be4eaa218754ebabad44803d0b7c817b27383,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:03:38.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7464" for this suite. • [SLOW TEST:23.804 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":133,"skipped":2174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:03:38.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-cc058af5-197b-4d01-99ce-607c68998b25 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:03:38.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "secrets-2739" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":134,"skipped":2209,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:03:38.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-386c028c-209f-4291-b584-d19c520b8857 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:03:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7047" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:03:43.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-kxbt7 in namespace proxy-3594 I1118 07:03:43.334623 10 runners.go:190] Created replication controller with name: proxy-service-kxbt7, namespace: proxy-3594, replica count: 1 I1118 07:03:44.385893 10 runners.go:190] proxy-service-kxbt7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:03:45.386555 10 runners.go:190] proxy-service-kxbt7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:03:46.387208 10 runners.go:190] proxy-service-kxbt7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 
07:03:47.387923 10 runners.go:190] proxy-service-kxbt7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 07:03:47.401: INFO: setup took 4.109717251s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Nov 18 07:03:47.411: INFO: (0) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 8.744513ms) Nov 18 07:03:47.412: INFO: (0) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 8.814705ms) Nov 18 07:03:47.412: INFO: (0) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 8.946328ms) Nov 18 07:03:47.416: INFO: (0) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 13.828695ms) Nov 18 07:03:47.416: INFO: (0) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 13.373336ms) Nov 18 07:03:47.417: INFO: (0) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 13.888592ms) Nov 18 07:03:47.417: INFO: (0) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 14.275834ms) Nov 18 07:03:47.417: INFO: (0) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 14.078418ms) Nov 18 07:03:47.417: INFO: (0) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 14.766464ms) Nov 18 07:03:47.418: INFO: (0) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 15.613608ms) Nov 18 07:03:47.418: INFO: (0) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 16.100684ms) Nov 18 07:03:47.418: INFO: (0) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 15.960346ms) Nov 18 07:03:47.419: INFO: (0) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 15.981811ms) Nov 18 07:03:47.419: INFO: (0) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 16.329156ms) Nov 18 07:03:47.419: INFO: (0) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 16.114295ms) Nov 18 07:03:47.423: INFO: (0) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 4.664619ms) Nov 18 07:03:47.429: INFO: (1) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 4.900813ms) Nov 18 07:03:47.431: INFO: (1) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 7.519292ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 7.506295ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 7.659703ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 7.591422ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 8.144916ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 8.173927ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 8.247671ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 8.234901ms) Nov 18 07:03:47.432: INFO: (1) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 8.432552ms) Nov 18 07:03:47.433: INFO: (1) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: ... (200; 8.431945ms) Nov 18 07:03:47.433: INFO: (1) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 9.037542ms) Nov 18 07:03:47.433: INFO: (1) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 8.730462ms) Nov 18 07:03:47.433: INFO: (1) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 8.970708ms) Nov 18 07:03:47.439: INFO: (2) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 5.269982ms) Nov 18 07:03:47.439: INFO: (2) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 5.487754ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 6.303902ms) Nov 18 07:03:47.439: INFO: (2) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 6.116938ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.260594ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.217072ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.642846ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.824155ms) Nov 18 07:03:47.440: INFO: (2) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 7.029861ms) Nov 18 07:03:47.441: INFO: (2) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test<... 
(200; 7.204197ms) Nov 18 07:03:47.441: INFO: (2) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 7.036028ms) Nov 18 07:03:47.441: INFO: (2) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 7.842816ms) Nov 18 07:03:47.441: INFO: (2) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 7.701065ms) Nov 18 07:03:47.441: INFO: (2) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 7.421408ms) Nov 18 07:03:47.445: INFO: (3) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 3.508909ms) Nov 18 07:03:47.446: INFO: (3) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 3.779487ms) Nov 18 07:03:47.447: INFO: (3) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: ... (200; 5.980537ms) Nov 18 07:03:47.448: INFO: (3) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 5.987641ms) Nov 18 07:03:47.448: INFO: (3) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.088403ms) Nov 18 07:03:47.448: INFO: (3) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 6.138738ms) Nov 18 07:03:47.448: INFO: (3) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.399392ms) Nov 18 07:03:47.449: INFO: (3) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.949283ms) Nov 18 07:03:47.449: INFO: (3) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 7.644209ms) Nov 18 07:03:47.449: INFO: (3) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 7.476299ms) Nov 18 07:03:47.449: INFO: (3) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 7.634279ms) Nov 18 07:03:47.449: INFO: (3) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 7.730789ms) Nov 18 07:03:47.450: INFO: (3) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 8.030964ms) Nov 18 07:03:47.450: INFO: (3) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 7.841436ms) Nov 18 07:03:47.450: INFO: (3) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 8.244753ms) Nov 18 07:03:47.455: INFO: (4) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 4.530481ms) Nov 18 07:03:47.455: INFO: (4) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 4.852293ms) Nov 18 07:03:47.455: INFO: (4) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 4.956545ms) Nov 18 07:03:47.456: INFO: (4) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 5.793927ms) Nov 18 07:03:47.456: INFO: (4) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 5.879817ms) Nov 18 07:03:47.457: INFO: (4) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.629401ms) Nov 18 07:03:47.457: INFO: (4) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.876038ms) Nov 18 07:03:47.457: INFO: (4) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: ... (200; 12.316731ms) Nov 18 07:03:47.472: INFO: (5) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 12.492268ms) Nov 18 07:03:47.472: INFO: (5) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 12.713875ms) Nov 18 07:03:47.472: INFO: (5) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 12.885338ms) Nov 18 07:03:47.473: INFO: (5) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 13.269209ms) Nov 18 07:03:47.473: INFO: (5) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 13.212457ms) Nov 18 07:03:47.473: INFO: (5) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 13.227033ms) Nov 18 07:03:47.473: INFO: (5) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 13.565533ms) Nov 18 07:03:47.473: INFO: (5) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 13.766503ms) Nov 18 07:03:47.474: INFO: (5) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 13.905466ms) Nov 18 07:03:47.474: INFO: (5) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 14.047358ms) Nov 18 07:03:47.478: INFO: (6) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 3.872084ms) Nov 18 07:03:47.478: INFO: (6) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 4.25829ms) Nov 18 07:03:47.478: INFO: (6) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 4.511988ms) Nov 18 07:03:47.478: INFO: (6) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 4.640836ms) Nov 18 07:03:47.480: INFO: (6) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 5.973221ms) Nov 18 07:03:47.480: INFO: (6) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 6.174685ms) Nov 18 07:03:47.480: INFO: (6) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.431672ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 6.497217ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.949834ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.922602ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 7.117572ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 7.392257ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 7.323257ms) Nov 18 07:03:47.481: INFO: (6) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test<... (200; 3.174793ms) Nov 18 07:03:47.486: INFO: (7) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 4.059905ms) Nov 18 07:03:47.488: INFO: (7) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 4.580998ms) Nov 18 07:03:47.489: INFO: (7) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 4.316101ms) Nov 18 07:03:47.490: INFO: (7) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 5.294945ms) Nov 18 07:03:47.494: INFO: (7) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 9.511754ms) Nov 18 07:03:47.494: INFO: (7) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 9.789565ms) Nov 18 07:03:47.495: INFO: (7) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 9.749281ms) Nov 18 07:03:47.495: INFO: (7) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 9.68469ms) Nov 18 07:03:47.495: INFO: (7) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 9.90747ms) Nov 18 07:03:47.499: INFO: (8) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 4.436617ms) Nov 18 07:03:47.500: INFO: (8) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 5.076795ms) Nov 18 07:03:47.500: INFO: (8) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 4.863461ms) Nov 18 07:03:47.500: INFO: (8) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 5.473534ms) Nov 18 07:03:47.500: INFO: (8) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 5.449879ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 5.335271ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 5.480943ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 5.587916ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 5.897505ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 5.637838ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 5.854032ms) Nov 18 07:03:47.501: INFO: (8) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: ... (200; 5.433603ms) Nov 18 07:03:47.508: INFO: (9) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 5.791835ms) Nov 18 07:03:47.508: INFO: (9) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 5.843451ms) Nov 18 07:03:47.508: INFO: (9) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 5.838015ms) Nov 18 07:03:47.508: INFO: (9) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.098053ms) Nov 18 07:03:47.508: INFO: (9) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 6.31478ms) Nov 18 07:03:47.509: INFO: (9) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.184138ms) Nov 18 07:03:47.509: INFO: (9) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test<... (200; 3.592207ms) Nov 18 07:03:47.514: INFO: (10) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 4.34371ms) Nov 18 07:03:47.514: INFO: (10) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 4.233978ms) Nov 18 07:03:47.514: INFO: (10) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 5.174793ms) Nov 18 07:03:47.515: INFO: (10) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... 
(200; 5.220928ms) Nov 18 07:03:47.515: INFO: (10) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 5.759529ms) Nov 18 07:03:47.515: INFO: (10) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 5.451701ms) Nov 18 07:03:47.516: INFO: (10) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 6.17905ms) Nov 18 07:03:47.516: INFO: (10) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 6.79454ms) Nov 18 07:03:47.516: INFO: (10) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 6.962333ms) Nov 18 07:03:47.516: INFO: (10) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.947956ms) Nov 18 07:03:47.516: INFO: (10) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 7.068428ms) Nov 18 07:03:47.517: INFO: (10) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 7.05133ms) Nov 18 07:03:47.520: INFO: (11) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 3.464989ms) Nov 18 07:03:47.521: INFO: (11) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 3.778374ms) Nov 18 07:03:47.521: INFO: (11) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 6.065064ms) Nov 18 07:03:47.523: INFO: (11) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 6.262343ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 6.759866ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 6.681457ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.704681ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 6.770667ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 6.9592ms) Nov 18 07:03:47.524: INFO: (11) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 7.253667ms) Nov 18 07:03:47.528: INFO: (12) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 3.398912ms) Nov 18 07:03:47.528: INFO: (12) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 3.875685ms) Nov 18 07:03:47.529: INFO: (12) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 4.879316ms) Nov 18 07:03:47.529: INFO: (12) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 5.083202ms) Nov 18 07:03:47.529: INFO: (12) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 5.138384ms) Nov 18 07:03:47.530: INFO: (12) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 5.629545ms) Nov 18 07:03:47.530: INFO: (12) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.024987ms) Nov 18 07:03:47.530: INFO: (12) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 5.976779ms) Nov 18 07:03:47.530: INFO: (12) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.030478ms) Nov 18 07:03:47.531: INFO: (12) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 6.365986ms) Nov 18 07:03:47.531: INFO: (12) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 7.301429ms) Nov 18 07:03:47.532: INFO: (12) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 7.357307ms) Nov 18 07:03:47.532: INFO: (12) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 7.797399ms) Nov 18 07:03:47.532: INFO: (12) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 7.7066ms) Nov 18 07:03:47.538: INFO: (13) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 5.028049ms) Nov 18 07:03:47.538: INFO: (13) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 5.25515ms) Nov 18 07:03:47.538: INFO: (13) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 5.06225ms) Nov 18 07:03:47.539: INFO: (13) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 6.456241ms) Nov 18 07:03:47.539: INFO: (13) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 6.198355ms) Nov 18 07:03:47.539: INFO: (13) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 6.526702ms) Nov 18 07:03:47.540: INFO: (13) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.951047ms) Nov 18 07:03:47.540: INFO: (13) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 7.045784ms) Nov 18 07:03:47.541: INFO: (13) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 23.403915ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 23.359313ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... 
(200; 24.646267ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 24.277856ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 24.394602ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 24.540724ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 23.806964ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 24.837058ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 24.187753ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 24.268359ms) Nov 18 07:03:47.570: INFO: (14) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 25.353359ms) Nov 18 07:03:47.571: INFO: (14) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 24.965745ms) Nov 18 07:03:47.571: INFO: (14) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 24.696443ms) Nov 18 07:03:47.574: INFO: (15) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 3.710708ms) Nov 18 07:03:47.575: INFO: (15) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 3.831486ms) Nov 18 07:03:47.575: INFO: (15) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 4.25791ms) Nov 18 07:03:47.575: INFO: (15) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 4.331907ms) Nov 18 07:03:47.575: INFO: (15) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 4.507776ms) Nov 18 07:03:47.576: INFO: (15) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 4.720007ms) Nov 18 07:03:47.576: INFO: (15) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 4.735811ms) Nov 18 07:03:47.576: INFO: (15) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 4.953971ms) Nov 18 07:03:47.576: INFO: (15) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 6.787605ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 7.025546ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 7.090686ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.999871ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 7.060933ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 7.267884ms) Nov 18 07:03:47.585: INFO: (16) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 7.480902ms) Nov 18 07:03:47.589: INFO: (17) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 5.126218ms) Nov 18 07:03:47.592: INFO: (17) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 6.566411ms) Nov 18 07:03:47.592: INFO: (17) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 6.77722ms) Nov 18 07:03:47.592: INFO: (17) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 6.83025ms) Nov 18 07:03:47.592: INFO: (17) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 7.071578ms) Nov 18 07:03:47.592: INFO: (17) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 7.210514ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 7.081383ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:462/proxy/: tls qux (200; 7.146402ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 7.370288ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 7.584524ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 7.502882ms) Nov 18 07:03:47.593: INFO: (17) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 7.724311ms) Nov 18 07:03:47.597: INFO: (18) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 3.864645ms) Nov 18 07:03:47.597: INFO: (18) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... (200; 3.972753ms) Nov 18 07:03:47.598: INFO: (18) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... 
(200; 3.848536ms) Nov 18 07:03:47.598: INFO: (18) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 4.726481ms) Nov 18 07:03:47.598: INFO: (18) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: test (200; 5.18259ms) Nov 18 07:03:47.599: INFO: (18) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 5.843241ms) Nov 18 07:03:47.599: INFO: (18) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 6.052247ms) Nov 18 07:03:47.599: INFO: (18) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 5.842083ms) Nov 18 07:03:47.600: INFO: (18) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 6.045178ms) Nov 18 07:03:47.600: INFO: (18) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 6.115362ms) Nov 18 07:03:47.600: INFO: (18) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 6.375307ms) Nov 18 07:03:47.600: INFO: (18) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 6.293454ms) Nov 18 07:03:47.603: INFO: (19) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 3.2585ms) Nov 18 07:03:47.603: INFO: (19) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:160/proxy/: foo (200; 3.267071ms) Nov 18 07:03:47.604: INFO: (19) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:460/proxy/: tls baz (200; 3.356901ms) Nov 18 07:03:47.604: INFO: (19) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:162/proxy/: bar (200; 4.058963ms) Nov 18 07:03:47.605: INFO: (19) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname2/proxy/: bar (200; 4.01869ms) Nov 18 07:03:47.605: INFO: (19) /api/v1/namespaces/proxy-3594/pods/http:proxy-service-kxbt7-h44s6:1080/proxy/: ... (200; 5.02186ms) Nov 18 07:03:47.605: INFO: (19) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname2/proxy/: bar (200; 4.91809ms) Nov 18 07:03:47.605: INFO: (19) /api/v1/namespaces/proxy-3594/services/proxy-service-kxbt7:portname1/proxy/: foo (200; 5.009743ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6/proxy/: test (200; 5.164577ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname2/proxy/: tls qux (200; 5.492517ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/pods/proxy-service-kxbt7-h44s6:1080/proxy/: test<... 
(200; 5.368047ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/services/https:proxy-service-kxbt7:tlsportname1/proxy/: tls baz (200; 5.732718ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/services/http:proxy-service-kxbt7:portname1/proxy/: foo (200; 5.701148ms) Nov 18 07:03:47.606: INFO: (19) /api/v1/namespaces/proxy-3594/pods/https:proxy-service-kxbt7-h44s6:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-c77ca985-34e5-4a6e-8248-224c8020a7e8 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-c77ca985-34e5-4a6e-8248-224c8020a7e8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:04:06.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5967" for this suite. • [SLOW TEST:6.227 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:04:06.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:04:06.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7382' Nov 18 07:04:09.096: INFO: stderr: "" Nov 18 07:04:09.096: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Nov 18 
07:04:09.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7382' Nov 18 07:04:12.299: INFO: stderr: "" Nov 18 07:04:12.299: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Nov 18 07:04:13.309: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:04:13.309: INFO: Found 1 / 1 Nov 18 07:04:13.309: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 18 07:04:13.315: INFO: Selector matched 1 pods for map[app:agnhost] Nov 18 07:04:13.316: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 18 07:04:13.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe pod agnhost-primary-hnbn6 --namespace=kubectl-7382' Nov 18 07:04:14.919: INFO: stderr: "" Nov 18 07:04:14.919: INFO: stdout: "Name: agnhost-primary-hnbn6\nNamespace: kubectl-7382\nPriority: 0\nNode: leguer-worker2/172.18.0.17\nStart Time: Wed, 18 Nov 2020 07:04:09 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.179\nIPs:\n IP: 10.244.1.179\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://a1b9ce95f0a3efb1e4cee3b75ec9d6c8128369b7fd44d4573ef6d436f72c62a7\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 18 Nov 2020 07:04:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ml9m9 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ml9m9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ml9m9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-7382/agnhost-primary-hnbn6 to leguer-worker2\n Normal Pulled 4s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Nov 18 07:04:14.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-7382' Nov 18 07:04:16.440: INFO: stderr: "" Nov 18 07:04:16.440: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7382\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-primary-hnbn6\n" Nov 18 07:04:16.442: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-7382' Nov 18 07:04:17.781: INFO: stderr: "" Nov 18 07:04:17.782: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-7382\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.106.19.26\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.179:6379\nSession Affinity: None\nEvents: \n" Nov 18 07:04:17.794: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe node leguer-control-plane' Nov 18 07:04:19.731: INFO: stderr: "" Nov 18 07:04:19.731: INFO: stdout: "Name: leguer-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Oct 2020 09:51:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Wed, 18 Nov 2020 07:04:12 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 18 Nov 2020 07:03:04 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 18 Nov 2020 07:03:04 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 18 Nov 2020 07:03:04 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 18 Nov 2020 07:03:04 +0000 Sun, 04 Oct 2020 09:51:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.19\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 6326bc1b5ba447818239288d64d2cd76\n System UUID: 653741b7-2395-4557-a394-18309703661a\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-5ftzx 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 44d\n kube-system coredns-f9fd979d6-fx25r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 44d\n kube-system etcd-leguer-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kindnet-sdmgv 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 44d\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system 
kube-proxy-x65h9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 44d\n local-path-storage local-path-provisioner-78776bfc44-7ptcx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Nov 18 07:04:19.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe namespace kubectl-7382' Nov 18 07:04:21.165: INFO: stderr: "" Nov 18 07:04:21.165: INFO: stdout: "Name: kubectl-7382\nLabels: e2e-framework=kubectl\n e2e-run=c5925752-cfbe-4b4f-859a-1581ff40fb29\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:04:21.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7382" for this suite. • [SLOW TEST:14.565 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":138,"skipped":2257,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:04:21.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:04:21.276: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Nov 18 07:04:42.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 create -f -' Nov 18 07:04:48.003: INFO: stderr: "" Nov 18 07:04:48.003: INFO: stdout: 
"e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 18 07:04:48.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 delete e2e-test-crd-publish-openapi-5675-crds test-foo' Nov 18 07:04:49.372: INFO: stderr: "" Nov 18 07:04:49.372: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Nov 18 07:04:49.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 apply -f -' Nov 18 07:04:52.451: INFO: stderr: "" Nov 18 07:04:52.451: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Nov 18 07:04:52.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 delete e2e-test-crd-publish-openapi-5675-crds test-foo' Nov 18 07:04:53.871: INFO: stderr: "" Nov 18 07:04:53.871: INFO: stdout: "e2e-test-crd-publish-openapi-5675-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Nov 18 07:04:53.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 create -f -' Nov 18 07:04:56.538: INFO: rc: 1 Nov 18 07:04:56.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 apply -f -' Nov 18 07:04:59.290: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Nov 18 07:04:59.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 create -f -' Nov 18 07:05:02.326: INFO: rc: 1 Nov 18 07:05:02.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4931 apply -f -' Nov 18 07:05:05.228: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Nov 18 07:05:05.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5675-crds' Nov 18 07:05:08.724: INFO: stderr: "" Nov 18 07:05:08.724: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Nov 18 07:05:08.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5675-crds.metadata' Nov 18 07:05:11.780: INFO: stderr: "" Nov 18 07:05:11.780: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. 
Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. 
Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Nov 18 07:05:11.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5675-crds.spec' Nov 18 07:05:14.813: INFO: stderr: "" Nov 18 07:05:14.814: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Nov 18 07:05:14.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5675-crds.spec.bars' Nov 18 07:05:17.031: INFO: stderr: "" Nov 18 07:05:17.031: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5675-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Nov 18 07:05:17.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5675-crds.spec.bars2' Nov 18 07:05:20.733: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:31.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4931" for this suite. 
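The recursive explain output above is generated from the CRD's published OpenAPI v3 schema. A minimal sketch with the apiextensions clientset, using illustrative group and kind names in place of the suite's generated e2e-test-crd-publish-openapi-5675 names, of a CRD whose spec.bars carries the same fields:

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs := apiextclient.NewForConfigOrDie(cfg)

	// Structural schema mirroring the fields in the explain output above:
	// bars is a list of objects with a required "name", an optional "age",
	// and a "bazs" string list.
	barProps := map[string]apiextv1.JSONSchemaProps{
		"name": {Type: "string", Description: "Name of Bar."},
		"age":  {Type: "string", Description: "Age of Bar."},
		"bazs": {Type: "array", Description: "List of Bazs.", Items: &apiextv1.JSONSchemaPropsOrArray{
			Schema: &apiextv1.JSONSchemaProps{Type: "string"}}},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList"},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {
								Type:        "object",
								Description: "Specification of Foo",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.",
										Items: &apiextv1.JSONSchemaPropsOrArray{
											Schema: &apiextv1.JSONSchemaProps{
												Type:       "object",
												Required:   []string{"name"},
												Properties: barProps,
											}}},
								},
							},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

Once the apiserver serves such a CRD, kubectl explain resolves each property path, including the bars2 miss (rc: 1), from that published schema.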
• [SLOW TEST:70.414 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":139,"skipped":2257,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:31.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-ae0cb16d-77c9-4551-96e5-a69b1b394095 STEP: Creating a pod to test consume secrets Nov 18 07:05:31.725: INFO: Waiting up to 5m0s for pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1" in namespace "secrets-2889" to be "Succeeded or Failed" Nov 18 07:05:31.741: INFO: Pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.604719ms Nov 18 07:05:33.747: INFO: Pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021682656s Nov 18 07:05:35.753: INFO: Pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027818406s Nov 18 07:05:37.760: INFO: Pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03450354s STEP: Saw pod success Nov 18 07:05:37.760: INFO: Pod "pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1" satisfied condition "Succeeded or Failed" Nov 18 07:05:37.764: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1 container secret-env-test: STEP: delete the pod Nov 18 07:05:37.796: INFO: Waiting for pod pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1 to disappear Nov 18 07:05:37.832: INFO: Pod pod-secrets-c0513f15-b258-4245-affe-8b73ff41c9b1 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:37.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2889" for this suite. 
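What this Secrets test does can be sketched with client-go: create the secret, then reference it from a container's environment via secretKeyRef. The namespace, key, and busybox image below are illustrative stand-ins, not the run's generated values:

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "secrets-demo"

	// Secret holding one key; the pod below surfaces it as $SECRET_DATA.
	sec := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []v1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &v1.EnvVarSource{
						SecretKeyRef: &v1.SecretKeySelector{
							LocalObjectReference: v1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The test then waits for the pod to reach Succeeded and checks the container log for the secret's value, which is the Pending/Pending/Succeeded polling visible above.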
• [SLOW TEST:6.278 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:37.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Nov 18 07:05:37.946: INFO: created test-pod-1 Nov 18 07:05:37.957: INFO: created test-pod-2 Nov 18 07:05:38.007: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:38.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7001" for this suite. 
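The collection delete exercised by this Pods test is a single API call selected by label, not three individual pod deletes. A client-go sketch (namespace, label, and pause image are assumed placeholders):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "pods-demo"

	// Create a labeled set of pods, mirroring test-pod-1..3 from the log.
	for _, name := range []string{"test-pod-1", "test-pod-2", "test-pod-3"} {
		pod := &v1.Pod{
			ObjectMeta: metav1.ObjectMeta{Name: name, Labels: map[string]string{"type": "Testing"}},
			Spec: v1.PodSpec{Containers: []v1.Container{{
				Name: "token-test", Image: "k8s.gcr.io/pause:3.2",
			}}},
		}
		if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// One call deletes everything the label selector matches.
	err := cs.CoreV1().Pods(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"})
	if err != nil {
		panic(err)
	}
}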
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":141,"skipped":2302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:38.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-810c3bab-c96b-410d-ac8c-bd2b5dc6bd12 STEP: Creating a pod to test consume configMaps Nov 18 07:05:38.324: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6" in namespace "projected-2869" to be "Succeeded or Failed" Nov 18 07:05:38.340: INFO: Pod "pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.055647ms Nov 18 07:05:40.348: INFO: Pod "pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024042597s Nov 18 07:05:42.356: INFO: Pod "pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032312969s STEP: Saw pod success Nov 18 07:05:42.356: INFO: Pod "pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6" satisfied condition "Succeeded or Failed" Nov 18 07:05:42.361: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6 container projected-configmap-volume-test: STEP: delete the pod Nov 18 07:05:42.455: INFO: Waiting for pod pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6 to disappear Nov 18 07:05:42.464: INFO: Pod pod-projected-configmaps-722942e2-4432-4232-9936-0faa885b9fd6 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:42.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2869" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":142,"skipped":2366,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:42.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Nov 18 07:05:47.267: INFO: Successfully updated pod "annotationupdate7e322792-ae61-431b-81c2-b3e9327382a8" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:51.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4718" for this suite. 
• [SLOW TEST:8.738 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":143,"skipped":2371,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:51.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:05:51.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1036" for this suite. 
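The same discovery walk this CRD test performs against /apis can be done with client-go's discovery client. This sketch checks the two documents the test fetches (kubeconfig path as in the run; output strings are ours):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	dc := discovery.NewDiscoveryClientForConfigOrDie(cfg)

	// /apis: look for the apiextensions.k8s.io group and its served versions.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("found group, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/apiextensions.k8s.io/v1: confirm customresourcedefinitions is listed.
	rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		if r.Name == "customresourcedefinitions" {
			fmt.Println("customresourcedefinitions is in the discovery document")
		}
	}
}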
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":144,"skipped":2383,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:05:51.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:05:51.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Nov 18 07:05:52.153: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:05:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:05:52Z]] name:name1 resourceVersion:11996562 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:89ba4af9-5865-4a79-a512-fbdfc2d75852] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Nov 18 07:06:02.167: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:06:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:06:02Z]] name:name2 resourceVersion:11996607 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1fde7237-e943-4144-a47b-65f1648c3d44] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Nov 18 07:06:12.181: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:05:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:06:12Z]] name:name1 resourceVersion:11996639 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:89ba4af9-5865-4a79-a512-fbdfc2d75852] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Nov 18 07:06:22.194: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:06:02Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:06:22Z]] name:name2 resourceVersion:11996671 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1fde7237-e943-4144-a47b-65f1648c3d44] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Nov 18 07:06:32.209: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:05:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:06:12Z]] name:name1 resourceVersion:11996701 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:89ba4af9-5865-4a79-a512-fbdfc2d75852] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Nov 18 07:06:42.225: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-11-18T07:06:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-11-18T07:06:22Z]] name:name2 resourceVersion:11996731 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1fde7237-e943-4144-a47b-65f1648c3d44] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:06:52.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8591" for this suite. 
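The ADDED/MODIFIED/DELETED lines above are events from a watch opened on the custom resource. With the dynamic client the equivalent is roughly the following; the noxus resource is cluster-scoped, per the selfLink in the events, and the GVR is taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each create/update/delete of a CR arrives as ADDED / MODIFIED / DELETED,
	// matching the "Got :" lines in the test output above.
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}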
• [SLOW TEST:61.395 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":145,"skipped":2398,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:06:52.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Nov 18 07:06:52.974: INFO: Waiting up to 5m0s for pod "client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5" in namespace "containers-1092" to be "Succeeded or Failed" Nov 18 07:06:53.056: INFO: Pod "client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 81.51936ms Nov 18 07:06:55.064: INFO: Pod "client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089319426s Nov 18 07:06:57.071: INFO: Pod "client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097043324s STEP: Saw pod success Nov 18 07:06:57.071: INFO: Pod "client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5" satisfied condition "Succeeded or Failed" Nov 18 07:06:57.076: INFO: Trying to get logs from node leguer-worker2 pod client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5 container test-container: STEP: delete the pod Nov 18 07:06:57.113: INFO: Waiting for pod client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5 to disappear Nov 18 07:06:57.139: INFO: Pod client-containers-a62eb355-626e-4cd1-9153-c0ed4ad2d2f5 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:06:57.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1092" for this suite. 
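In pod-spec terms, Command overrides the image's ENTRYPOINT and Args overrides its CMD; the "override all" test sets both. A sketch (image and echo payload are placeholders):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "containers-demo"

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image ENTRYPOINT; Args replaces CMD.
				// Setting both, as here, discards the image defaults entirely.
				Command: []string{"echo"},
				Args:    []string{"override", "all"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}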
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":146,"skipped":2410,"failed":0} SSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:06:57.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:06:57.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7096" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":147,"skipped":2416,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:06:57.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:06:57.362: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Nov 18 07:06:58.451: INFO: Updating replication controller "condition-test" 
STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:06:58.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5392" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":148,"skipped":2418,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:06:58.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:07:05.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1449" for this suite. 
• [SLOW TEST:6.801 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":149,"skipped":2431,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:07:05.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6663 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-6663 Nov 18 07:07:05.768: INFO: Found 0 stateful pods, waiting for 1 Nov 18 07:07:15.777: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 07:07:15.845: INFO: Deleting all statefulset in ns statefulset-6663 Nov 18 07:07:15.914: INFO: Scaling statefulset ss to 0 Nov 18 07:07:36.017: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 07:07:36.022: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:07:36.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6663" for this suite. 
• [SLOW TEST:30.749 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":150,"skipped":2432,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:07:36.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 18 07:07:36.283: INFO: Waiting up to 5m0s for pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e" in namespace "emptydir-4461" to be "Succeeded or Failed" Nov 18 07:07:36.296: INFO: Pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.775592ms Nov 18 07:07:38.637: INFO: Pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35409125s Nov 18 07:07:40.644: INFO: Pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.36150209s Nov 18 07:07:42.652: INFO: Pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369694845s STEP: Saw pod success Nov 18 07:07:42.653: INFO: Pod "pod-cd5a538c-dc56-416d-8b70-d80ef210009e" satisfied condition "Succeeded or Failed" Nov 18 07:07:42.658: INFO: Trying to get logs from node leguer-worker pod pod-cd5a538c-dc56-416d-8b70-d80ef210009e container test-container: STEP: delete the pod Nov 18 07:07:42.695: INFO: Waiting for pod pod-cd5a538c-dc56-416d-8b70-d80ef210009e to disappear Nov 18 07:07:42.719: INFO: Pod pod-cd5a538c-dc56-416d-8b70-d80ef210009e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:07:42.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4461" for this suite. 
• [SLOW TEST:6.598 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:07:42.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Nov 18 07:07:49.912: INFO: 7 pods remaining Nov 18 07:07:49.913: INFO: 0 pods has nil DeletionTimestamp Nov 18 07:07:49.913: INFO: Nov 18 07:07:51.010: INFO: 0 pods remaining Nov 18 07:07:51.010: INFO: 0 pods has nil DeletionTimestamp Nov 18 07:07:51.010: INFO: STEP: Gathering metrics W1118 07:07:51.307063 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 18 07:08:53.334: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:08:53.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2827" for this suite. • [SLOW TEST:70.611 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":152,"skipped":2484,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:08:53.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:09:09.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6889" for this suite. • [SLOW TEST:16.271 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":153,"skipped":2496,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:09:09.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-569 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-569 I1118 07:09:09.973984 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-569, replica count: 2 I1118 07:09:13.025350 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:09:16.026070 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:09:19.026854 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 07:09:19.027: INFO: Creating new exec pod Nov 18 07:09:24.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod2lwpl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Nov 18 07:09:25.824: INFO: stderr: "I1118 07:09:25.682793 1945 log.go:181] (0x40001a40b0) (0x4000e98000) Create stream\nI1118 07:09:25.686820 1945 log.go:181] (0x40001a40b0) (0x4000e98000) Stream added, broadcasting: 1\nI1118 07:09:25.696456 1945 log.go:181] (0x40001a40b0) Reply frame received for 1\nI1118 07:09:25.697467 1945 log.go:181] (0x40001a40b0) (0x40004d4f00) Create stream\nI1118 07:09:25.697532 1945 log.go:181] (0x40001a40b0) (0x40004d4f00) Stream added, broadcasting: 3\nI1118 07:09:25.698797 1945 log.go:181] (0x40001a40b0) Reply frame received for 3\nI1118 07:09:25.699013 1945 log.go:181] (0x40001a40b0) (0x40004d5400) Create stream\nI1118 07:09:25.699070 1945 log.go:181] (0x40001a40b0) (0x40004d5400) Stream added, broadcasting: 5\nI1118 07:09:25.700345 1945 log.go:181] (0x40001a40b0) Reply frame received for 5\nI1118 07:09:25.803959 1945 log.go:181] (0x40001a40b0) Data frame received for 3\nI1118 07:09:25.804375 1945 log.go:181] (0x40001a40b0) Data frame received for 5\nI1118 07:09:25.804576 1945 log.go:181] (0x40004d5400) (5) Data frame handling\nI1118 07:09:25.804934 1945 log.go:181] (0x40004d4f00) (3) Data frame 
handling\nI1118 07:09:25.805700 1945 log.go:181] (0x40001a40b0) Data frame received for 1\nI1118 07:09:25.805782 1945 log.go:181] (0x4000e98000) (1) Data frame handling\nI1118 07:09:25.806268 1945 log.go:181] (0x4000e98000) (1) Data frame sent\nI1118 07:09:25.806800 1945 log.go:181] (0x40004d5400) (5) Data frame sent\nI1118 07:09:25.806886 1945 log.go:181] (0x40001a40b0) Data frame received for 5\nI1118 07:09:25.806978 1945 log.go:181] (0x40004d5400) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1118 07:09:25.810504 1945 log.go:181] (0x40001a40b0) (0x4000e98000) Stream removed, broadcasting: 1\nI1118 07:09:25.811828 1945 log.go:181] (0x40001a40b0) Go away received\nI1118 07:09:25.814828 1945 log.go:181] (0x40001a40b0) (0x4000e98000) Stream removed, broadcasting: 1\nI1118 07:09:25.815172 1945 log.go:181] (0x40001a40b0) (0x40004d4f00) Stream removed, broadcasting: 3\nI1118 07:09:25.815408 1945 log.go:181] (0x40001a40b0) (0x40004d5400) Stream removed, broadcasting: 5\n" Nov 18 07:09:25.825: INFO: stdout: "" Nov 18 07:09:25.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod2lwpl -- /bin/sh -x -c nc -zv -t -w 2 10.106.106.246 80' Nov 18 07:09:27.439: INFO: stderr: "I1118 07:09:27.309793 1966 log.go:181] (0x40001a8370) (0x4000f8e000) Create stream\nI1118 07:09:27.312411 1966 log.go:181] (0x40001a8370) (0x4000f8e000) Stream added, broadcasting: 1\nI1118 07:09:27.326098 1966 log.go:181] (0x40001a8370) Reply frame received for 1\nI1118 07:09:27.327343 1966 log.go:181] (0x40001a8370) (0x4000e80000) Create stream\nI1118 07:09:27.327459 1966 log.go:181] (0x40001a8370) (0x4000e80000) Stream added, broadcasting: 3\nI1118 07:09:27.329906 1966 log.go:181] (0x40001a8370) Reply frame received for 3\nI1118 07:09:27.330219 1966 log.go:181] (0x40001a8370) (0x40000de960) Create stream\nI1118 07:09:27.330293 1966 log.go:181] (0x40001a8370) (0x40000de960) Stream added, broadcasting: 5\nI1118 07:09:27.331868 1966 log.go:181] (0x40001a8370) Reply frame received for 5\nI1118 07:09:27.419062 1966 log.go:181] (0x40001a8370) Data frame received for 3\nI1118 07:09:27.419482 1966 log.go:181] (0x40001a8370) Data frame received for 5\nI1118 07:09:27.419871 1966 log.go:181] (0x40000de960) (5) Data frame handling\nI1118 07:09:27.420736 1966 log.go:181] (0x40001a8370) Data frame received for 1\nI1118 07:09:27.421065 1966 log.go:181] (0x4000e80000) (3) Data frame handling\nI1118 07:09:27.421306 1966 log.go:181] (0x4000f8e000) (1) Data frame handling\nI1118 07:09:27.421483 1966 log.go:181] (0x40000de960) (5) Data frame sent\nI1118 07:09:27.421882 1966 log.go:181] (0x4000f8e000) (1) Data frame sent\n+ nc -zv -t -w 2 10.106.106.246 80\nConnection to 10.106.106.246 80 port [tcp/http] succeeded!\nI1118 07:09:27.423949 1966 log.go:181] (0x40001a8370) (0x4000f8e000) Stream removed, broadcasting: 1\nI1118 07:09:27.424029 1966 log.go:181] (0x40001a8370) Data frame received for 5\nI1118 07:09:27.425678 1966 log.go:181] (0x40000de960) (5) Data frame handling\nI1118 07:09:27.426843 1966 log.go:181] (0x40001a8370) Go away received\nI1118 07:09:27.429894 1966 log.go:181] (0x40001a8370) (0x4000f8e000) Stream removed, broadcasting: 1\nI1118 07:09:27.430202 1966 log.go:181] (0x40001a8370) (0x4000e80000) Stream removed, broadcasting: 3\nI1118 07:09:27.430421 1966 log.go:181] (0x40001a8370) (0x40000de960) Stream removed, broadcasting: 5\n" Nov 18 07:09:27.439: 
INFO: stdout: "" Nov 18 07:09:27.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod2lwpl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 30279' Nov 18 07:09:29.119: INFO: stderr: "I1118 07:09:28.989605 1987 log.go:181] (0x400002fc30) (0x4000c5e6e0) Create stream\nI1118 07:09:28.993546 1987 log.go:181] (0x400002fc30) (0x4000c5e6e0) Stream added, broadcasting: 1\nI1118 07:09:29.015847 1987 log.go:181] (0x400002fc30) Reply frame received for 1\nI1118 07:09:29.016504 1987 log.go:181] (0x400002fc30) (0x40001f4000) Create stream\nI1118 07:09:29.016570 1987 log.go:181] (0x400002fc30) (0x40001f4000) Stream added, broadcasting: 3\nI1118 07:09:29.017861 1987 log.go:181] (0x400002fc30) Reply frame received for 3\nI1118 07:09:29.018137 1987 log.go:181] (0x400002fc30) (0x4000c5e0a0) Create stream\nI1118 07:09:29.018206 1987 log.go:181] (0x400002fc30) (0x4000c5e0a0) Stream added, broadcasting: 5\nI1118 07:09:29.019130 1987 log.go:181] (0x400002fc30) Reply frame received for 5\nI1118 07:09:29.099976 1987 log.go:181] (0x400002fc30) Data frame received for 5\nI1118 07:09:29.100183 1987 log.go:181] (0x400002fc30) Data frame received for 3\nI1118 07:09:29.100278 1987 log.go:181] (0x4000c5e0a0) (5) Data frame handling\nI1118 07:09:29.100450 1987 log.go:181] (0x40001f4000) (3) Data frame handling\nI1118 07:09:29.101166 1987 log.go:181] (0x400002fc30) Data frame received for 1\nI1118 07:09:29.101255 1987 log.go:181] (0x4000c5e6e0) (1) Data frame handling\nI1118 07:09:29.101568 1987 log.go:181] (0x4000c5e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.18 30279\nI1118 07:09:29.102393 1987 log.go:181] (0x4000c5e6e0) (1) Data frame sent\nI1118 07:09:29.102810 1987 log.go:181] (0x400002fc30) Data frame received for 5\nI1118 07:09:29.102928 1987 log.go:181] (0x4000c5e0a0) (5) Data frame handling\nConnection to 172.18.0.18 30279 port [tcp/30279] succeeded!\nI1118 07:09:29.103063 1987 log.go:181] (0x4000c5e0a0) (5) Data frame sent\nI1118 07:09:29.103226 1987 log.go:181] (0x400002fc30) Data frame received for 5\nI1118 07:09:29.103292 1987 log.go:181] (0x4000c5e0a0) (5) Data frame handling\nI1118 07:09:29.104281 1987 log.go:181] (0x400002fc30) (0x4000c5e6e0) Stream removed, broadcasting: 1\nI1118 07:09:29.106895 1987 log.go:181] (0x400002fc30) Go away received\nI1118 07:09:29.109575 1987 log.go:181] (0x400002fc30) (0x4000c5e6e0) Stream removed, broadcasting: 1\nI1118 07:09:29.109925 1987 log.go:181] (0x400002fc30) (0x40001f4000) Stream removed, broadcasting: 3\nI1118 07:09:29.110129 1987 log.go:181] (0x400002fc30) (0x4000c5e0a0) Stream removed, broadcasting: 5\n" Nov 18 07:09:29.120: INFO: stdout: "" Nov 18 07:09:29.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-569 execpod2lwpl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 30279' Nov 18 07:09:30.794: INFO: stderr: "I1118 07:09:30.697467 2007 log.go:181] (0x4000c86000) (0x40001c77c0) Create stream\nI1118 07:09:30.701875 2007 log.go:181] (0x4000c86000) (0x40001c77c0) Stream added, broadcasting: 1\nI1118 07:09:30.716235 2007 log.go:181] (0x4000c86000) Reply frame received for 1\nI1118 07:09:30.717059 2007 log.go:181] (0x4000c86000) (0x4000408140) Create stream\nI1118 07:09:30.717142 2007 log.go:181] (0x4000c86000) (0x4000408140) Stream added, broadcasting: 3\nI1118 07:09:30.718771 2007 log.go:181] (0x4000c86000) Reply frame received for 3\nI1118 07:09:30.719221 2007 log.go:181] 
(0x4000c86000) (0x4000926320) Create stream\nI1118 07:09:30.719313 2007 log.go:181] (0x4000c86000) (0x4000926320) Stream added, broadcasting: 5\nI1118 07:09:30.720799 2007 log.go:181] (0x4000c86000) Reply frame received for 5\nI1118 07:09:30.772324 2007 log.go:181] (0x4000c86000) Data frame received for 5\nI1118 07:09:30.772920 2007 log.go:181] (0x4000c86000) Data frame received for 3\nI1118 07:09:30.773062 2007 log.go:181] (0x4000408140) (3) Data frame handling\nI1118 07:09:30.773240 2007 log.go:181] (0x4000926320) (5) Data frame handling\nI1118 07:09:30.773488 2007 log.go:181] (0x4000c86000) Data frame received for 1\nI1118 07:09:30.773632 2007 log.go:181] (0x40001c77c0) (1) Data frame handling\nI1118 07:09:30.776492 2007 log.go:181] (0x4000926320) (5) Data frame sent\nI1118 07:09:30.776698 2007 log.go:181] (0x40001c77c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.17 30279\nConnection to 172.18.0.17 30279 port [tcp/30279] succeeded!\nI1118 07:09:30.777367 2007 log.go:181] (0x4000c86000) Data frame received for 5\nI1118 07:09:30.777504 2007 log.go:181] (0x4000926320) (5) Data frame handling\nI1118 07:09:30.779042 2007 log.go:181] (0x4000c86000) (0x40001c77c0) Stream removed, broadcasting: 1\nI1118 07:09:30.780822 2007 log.go:181] (0x4000c86000) Go away received\nI1118 07:09:30.783750 2007 log.go:181] (0x4000c86000) (0x40001c77c0) Stream removed, broadcasting: 1\nI1118 07:09:30.783990 2007 log.go:181] (0x4000c86000) (0x4000408140) Stream removed, broadcasting: 3\nI1118 07:09:30.784144 2007 log.go:181] (0x4000c86000) (0x4000926320) Stream removed, broadcasting: 5\n" Nov 18 07:09:30.796: INFO: stdout: "" Nov 18 07:09:30.796: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:09:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-569" for this suite. 
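The type flip in this Services test is an update to the existing Service object: ExternalName must be cleared, and a selector plus port are needed so the NodePort service has endpoints. A client-go sketch (selector and port values are illustrative; the apiserver assigns the node port, 30279 in this run):

package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // error handling elided
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "services-demo"

	svc, err := cs.CoreV1().Services(ns).Get(ctx, "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Flip the type and drop the ExternalName; point the service at the
	// replication controller's pods so traffic has somewhere to go.
	svc.Spec.Type = v1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []v1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}

	if _, err := cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// After the update, the nc probes above succeed against the ClusterIP on
	// port 80 and against each node IP on the assigned node port.
}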
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.230 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":154,"skipped":2506,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:09:30.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Nov 18 07:09:34.989: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-3223 PodName:var-expansion-6b947fae-42b0-4833-a657-f2fe4d45a47d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:09:34.989: INFO: >>> kubeConfig: /root/.kube/config I1118 07:09:35.045403 10 log.go:181] (0x40003d2000) (0x4002cbd5e0) Create stream I1118 07:09:35.045576 10 log.go:181] (0x40003d2000) (0x4002cbd5e0) Stream added, broadcasting: 1 I1118 07:09:35.049711 10 log.go:181] (0x40003d2000) Reply frame received for 1 I1118 07:09:35.049878 10 log.go:181] (0x40003d2000) (0x4002cbd680) Create stream I1118 07:09:35.049971 10 log.go:181] (0x40003d2000) (0x4002cbd680) Stream added, broadcasting: 3 I1118 07:09:35.051332 10 log.go:181] (0x40003d2000) Reply frame received for 3 I1118 07:09:35.051463 10 log.go:181] (0x40003d2000) (0x400408a000) Create stream I1118 07:09:35.051531 10 log.go:181] (0x40003d2000) (0x400408a000) Stream added, broadcasting: 5 I1118 07:09:35.053440 10 log.go:181] (0x40003d2000) Reply frame received for 5 I1118 07:09:35.119595 10 log.go:181] (0x40003d2000) Data frame received for 5 I1118 07:09:35.119750 10 log.go:181] (0x400408a000) (5) Data frame handling I1118 07:09:35.119923 10 log.go:181] (0x40003d2000) Data frame received for 3 I1118 07:09:35.120040 10 log.go:181] (0x4002cbd680) (3) Data frame handling I1118 07:09:35.120793 10 log.go:181] (0x40003d2000) Data frame received for 1 I1118 07:09:35.120997 10 log.go:181] (0x4002cbd5e0) (1) Data frame handling I1118 07:09:35.121121 10 log.go:181] (0x4002cbd5e0) (1) Data frame sent I1118 07:09:35.121222 10 
log.go:181] (0x40003d2000) (0x4002cbd5e0) Stream removed, broadcasting: 1 I1118 07:09:35.121337 10 log.go:181] (0x40003d2000) Go away received I1118 07:09:35.121502 10 log.go:181] (0x40003d2000) (0x4002cbd5e0) Stream removed, broadcasting: 1 I1118 07:09:35.121596 10 log.go:181] (0x40003d2000) (0x4002cbd680) Stream removed, broadcasting: 3 I1118 07:09:35.121670 10 log.go:181] (0x40003d2000) (0x400408a000) Stream removed, broadcasting: 5 STEP: test for file in mounted path Nov 18 07:09:35.126: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-3223 PodName:var-expansion-6b947fae-42b0-4833-a657-f2fe4d45a47d ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:09:35.127: INFO: >>> kubeConfig: /root/.kube/config I1118 07:09:35.206849 10 log.go:181] (0x40001b0370) (0x4001082460) Create stream I1118 07:09:35.207080 10 log.go:181] (0x40001b0370) (0x4001082460) Stream added, broadcasting: 1 I1118 07:09:35.211483 10 log.go:181] (0x40001b0370) Reply frame received for 1 I1118 07:09:35.211644 10 log.go:181] (0x40001b0370) (0x40035a8960) Create stream I1118 07:09:35.211711 10 log.go:181] (0x40001b0370) (0x40035a8960) Stream added, broadcasting: 3 I1118 07:09:35.213246 10 log.go:181] (0x40001b0370) Reply frame received for 3 I1118 07:09:35.213385 10 log.go:181] (0x40001b0370) (0x4002cbd720) Create stream I1118 07:09:35.213561 10 log.go:181] (0x40001b0370) (0x4002cbd720) Stream added, broadcasting: 5 I1118 07:09:35.215182 10 log.go:181] (0x40001b0370) Reply frame received for 5 I1118 07:09:35.287512 10 log.go:181] (0x40001b0370) Data frame received for 5 I1118 07:09:35.287708 10 log.go:181] (0x4002cbd720) (5) Data frame handling I1118 07:09:35.287815 10 log.go:181] (0x40001b0370) Data frame received for 3 I1118 07:09:35.287951 10 log.go:181] (0x40035a8960) (3) Data frame handling I1118 07:09:35.289474 10 log.go:181] (0x40001b0370) Data frame received for 1 I1118 07:09:35.289600 10 log.go:181] (0x4001082460) (1) Data frame handling I1118 07:09:35.289791 10 log.go:181] (0x4001082460) (1) Data frame sent I1118 07:09:35.289937 10 log.go:181] (0x40001b0370) (0x4001082460) Stream removed, broadcasting: 1 I1118 07:09:35.290089 10 log.go:181] (0x40001b0370) Go away received I1118 07:09:35.290340 10 log.go:181] (0x40001b0370) (0x4001082460) Stream removed, broadcasting: 1 I1118 07:09:35.290422 10 log.go:181] (0x40001b0370) (0x40035a8960) Stream removed, broadcasting: 3 I1118 07:09:35.290485 10 log.go:181] (0x40001b0370) (0x4002cbd720) Stream removed, broadcasting: 5 STEP: updating the annotation value Nov 18 07:09:35.808: INFO: Successfully updated pod "var-expansion-6b947fae-42b0-4833-a657-f2fe4d45a47d" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Nov 18 07:09:35.826: INFO: Deleting pod "var-expansion-6b947fae-42b0-4833-a657-f2fe4d45a47d" in namespace "var-expansion-3223" Nov 18 07:09:35.844: INFO: Wait up to 5m0s for pod "var-expansion-6b947fae-42b0-4833-a657-f2fe4d45a47d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:10:21.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3223" for this suite. 
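What this test exercised is the subPathExpr volume mount: the same emptyDir is mounted whole at one path and, expanded through an environment variable, as a subpath at another, so a file written under the expanded directory shows up at the subpath mount. A minimal sketch with invented names (the suite's actual pod spec differs):

    kubectl apply -n demo -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        # $(POD_NAME) is expanded by the kubelet before the shell runs.
        command: ["sh", "-c", "touch /volume_mount/$(POD_NAME)/test.log && sleep 600"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
        - name: workdir
          mountPath: /subpath_mount
          subPathExpr: $(POD_NAME)
      volumes:
      - name: workdir
        emptyDir: {}
    EOF
    # The file lands in the subpath view as well:
    kubectl exec subpath-demo -n demo -- test -f /subpath_mount/test.log && echo present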
• [SLOW TEST:51.040 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":155,"skipped":2520,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:10:21.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 18 07:10:21.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5076' Nov 18 07:10:23.440: INFO: stderr: "" Nov 18 07:10:23.440: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Nov 18 07:10:28.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5076 -o json' Nov 18 07:10:29.840: INFO: stderr: "" Nov 18 07:10:29.841: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-11-18T07:10:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n 
\"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-18T07:10:23Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.191\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-18T07:10:26Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5076\",\n \"resourceVersion\": \"11997896\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5076/pods/e2e-test-httpd-pod\",\n \"uid\": \"b3dea446-c1f5-4238-9a6e-2d5b9ac8efde\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-tqljw\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tqljw\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-tqljw\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T07:10:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T07:10:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T07:10:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T07:10:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://045e2a02c15de334316d4d107f9062c7c668882c659f84646e55d5c2aecf9880\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": 
\"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-11-18T07:10:26Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.17\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.191\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.191\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-11-18T07:10:23Z\"\n }\n}\n" STEP: replace the image in the pod Nov 18 07:10:29.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5076' Nov 18 07:10:32.478: INFO: stderr: "" Nov 18 07:10:32.479: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Nov 18 07:10:32.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5076' Nov 18 07:10:36.833: INFO: stderr: "" Nov 18 07:10:36.833: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:10:36.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5076" for this suite. • [SLOW TEST:14.962 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":156,"skipped":2530,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:10:36.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the 
correct watchers observe the notification Nov 18 07:10:36.960: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11997948 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:10:36.962: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11997948 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Nov 18 07:10:46.978: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11997990 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:10:46.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11997990 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Nov 18 07:10:56.993: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11998020 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:10:56.995: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11998020 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Nov 18 07:11:07.008: INFO: 
Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11998050 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:11:07.009: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-a 7529d528-0b81-4f7a-84a6-f4b7b9b5cfba 11998050 0 2020-11-18 07:10:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-11-18 07:10:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Nov 18 07:11:17.023: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-b 0241d685-badb-4594-ab51-f7ff371b82f6 11998080 0 2020-11-18 07:11:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-18 07:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:11:17.024: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-b 0241d685-badb-4594-ab51-f7ff371b82f6 11998080 0 2020-11-18 07:11:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-18 07:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Nov 18 07:11:27.037: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-b 0241d685-badb-4594-ab51-f7ff371b82f6 11998106 0 2020-11-18 07:11:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-18 07:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 07:11:27.038: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4094 /api/v1/namespaces/watch-4094/configmaps/e2e-watch-test-configmap-b 0241d685-badb-4594-ab51-f7ff371b82f6 11998106 0 2020-11-18 07:11:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-11-18 07:11:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:11:37.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "watch-4094" for this suite. • [SLOW TEST:60.206 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":157,"skipped":2532,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:11:37.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Nov 18 07:11:37.398: INFO: Waiting up to 5m0s for pod "downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50" in namespace "downward-api-1178" to be "Succeeded or Failed" Nov 18 07:11:37.404: INFO: Pod "downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50": Phase="Pending", Reason="", readiness=false. Elapsed: 5.794065ms Nov 18 07:11:39.450: INFO: Pod "downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05161417s Nov 18 07:11:41.456: INFO: Pod "downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058187896s STEP: Saw pod success Nov 18 07:11:41.457: INFO: Pod "downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50" satisfied condition "Succeeded or Failed" Nov 18 07:11:41.466: INFO: Trying to get logs from node leguer-worker2 pod downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50 container dapi-container: STEP: delete the pod Nov 18 07:11:41.521: INFO: Waiting for pod downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50 to disappear Nov 18 07:11:41.550: INFO: Pod downward-api-1619f0b7-440f-4a45-a859-b170d70b9e50 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:11:41.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1178" for this suite. 
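The defaulting behavior just verified comes from resourceFieldRef: when the container declares no CPU or memory limit, the downward API substitutes the node's allocatable capacity. A minimal sketch under assumed names:

    kubectl apply -n demo -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-defaults
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu      # no limit set, so node allocatable is reported
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF
    kubectl logs downward-defaults -n demo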
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2542,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:11:41.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-627e18d8-a206-4a36-8024-41bcee982155 STEP: Creating a pod to test consume secrets Nov 18 07:11:41.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093" in namespace "projected-2855" to be "Succeeded or Failed" Nov 18 07:11:41.859: INFO: Pod "pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093": Phase="Pending", Reason="", readiness=false. Elapsed: 29.557059ms Nov 18 07:11:43.867: INFO: Pod "pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036818807s Nov 18 07:11:45.875: INFO: Pod "pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04476017s STEP: Saw pod success Nov 18 07:11:45.875: INFO: Pod "pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093" satisfied condition "Succeeded or Failed" Nov 18 07:11:45.881: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093 container projected-secret-volume-test: STEP: delete the pod Nov 18 07:11:46.001: INFO: Waiting for pod pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093 to disappear Nov 18 07:11:46.005: INFO: Pod pod-projected-secrets-e2b4d0de-2726-4228-97e1-3d45ab680093 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:11:46.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2855" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2551,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:11:46.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Nov 18 07:11:46.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3008' Nov 18 07:11:48.903: INFO: stderr: "" Nov 18 07:11:48.903: INFO: stdout: "pod/pause created\n" Nov 18 07:11:48.903: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Nov 18 07:11:48.904: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3008" to be "running and ready" Nov 18 07:11:48.929: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 24.975945ms Nov 18 07:11:50.937: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032900781s Nov 18 07:11:52.945: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.041199453s Nov 18 07:11:52.945: INFO: Pod "pause" satisfied condition "running and ready" Nov 18 07:11:52.945: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Nov 18 07:11:52.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3008' Nov 18 07:11:54.490: INFO: stderr: "" Nov 18 07:11:54.491: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Nov 18 07:11:54.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3008' Nov 18 07:11:55.901: INFO: stderr: "" Nov 18 07:11:55.901: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Nov 18 07:11:55.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3008' Nov 18 07:11:57.236: INFO: stderr: "" Nov 18 07:11:57.236: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Nov 18 07:11:57.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3008' Nov 18 07:11:58.596: INFO: stderr: "" Nov 18 07:11:58.596: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Nov 18 07:11:58.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3008' Nov 18 07:12:00.001: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 18 07:12:00.001: INFO: stdout: "pod \"pause\" force deleted\n" Nov 18 07:12:00.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3008' Nov 18 07:12:01.405: INFO: stderr: "No resources found in kubectl-3008 namespace.\n" Nov 18 07:12:01.405: INFO: stdout: "" Nov 18 07:12:01.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3008 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 18 07:12:02.786: INFO: stderr: "" Nov 18 07:12:02.786: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:12:02.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3008" for this suite. 
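The labeling syntax exercised above is worth calling out: a trailing dash removes a label. Against any running pod (names below are placeholders):

    kubectl label pod pause testing-label=testing-label-value -n demo
    kubectl get pod pause -L testing-label -n demo   # -L adds a TESTING-LABEL column
    kubectl label pod pause testing-label- -n demo   # trailing '-' deletes the label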
• [SLOW TEST:16.786 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":160,"skipped":2560,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:12:02.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Nov 18 07:12:02.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config api-versions' Nov 18 07:12:04.397: INFO: stderr: "" Nov 18 07:12:04.397: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:12:04.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2516" for this suite. 
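The assertion here is simply that the core API group appears as the bare version string "v1" in discovery output; outside the suite the same check is a one-liner:

    kubectl api-versions | grep -x v1   # exits 0 only if the core/v1 group-version is served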
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":161,"skipped":2573,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:12:04.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:12:04.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7553" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":162,"skipped":2587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:12:04.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-220269a7-bf88-42f8-b215-0bf5d3fa374b STEP: Creating secret with name secret-projected-all-test-volume-e0a3d258-c268-4efb-8a59-a3684a2cb99f STEP: Creating a pod to test Check all projections for projected volume plugin Nov 18 07:12:04.788: INFO: Waiting up to 5m0s for pod "projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7" in namespace "projected-2445" to be "Succeeded or Failed" Nov 18 07:12:04.833: INFO: Pod "projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.829461ms Nov 18 07:12:06.917: INFO: Pod "projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128530781s Nov 18 07:12:08.926: INFO: Pod "projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136984219s STEP: Saw pod success Nov 18 07:12:08.926: INFO: Pod "projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7" satisfied condition "Succeeded or Failed" Nov 18 07:12:08.931: INFO: Trying to get logs from node leguer-worker pod projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7 container projected-all-volume-test: STEP: delete the pod Nov 18 07:12:08.960: INFO: Waiting for pod projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7 to disappear Nov 18 07:12:08.975: INFO: Pod projected-volume-043e3b56-6ce0-4d84-b613-5358071819b7 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:12:08.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2445" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2620,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:12:08.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:12:09.217: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 18 07:12:30.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4451 create -f -' Nov 18 07:12:35.600: INFO: stderr: "" Nov 18 07:12:35.600: INFO: stdout: "e2e-test-crd-publish-openapi-2718-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 18 07:12:35.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4451 delete e2e-test-crd-publish-openapi-2718-crds test-cr' Nov 18 07:12:36.950: INFO: stderr: "" Nov 18 07:12:36.950: INFO: stdout: "e2e-test-crd-publish-openapi-2718-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Nov 18 07:12:36.951: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-4451 apply -f -' Nov 18 07:12:39.206: INFO: stderr: "" Nov 18 07:12:39.206: INFO: stdout: "e2e-test-crd-publish-openapi-2718-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Nov 18 07:12:39.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4451 delete e2e-test-crd-publish-openapi-2718-crds test-cr' Nov 18 07:12:40.563: INFO: stderr: "" Nov 18 07:12:40.563: INFO: stdout: "e2e-test-crd-publish-openapi-2718-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 18 07:12:40.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2718-crds' Nov 18 07:12:44.347: INFO: stderr: "" Nov 18 07:12:44.347: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2718-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:13:05.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4451" for this suite. 
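The kubectl explain calls succeed because the CRD's OpenAPI schema was published by the apiserver. With any CRD that sets x-kubernetes-preserve-unknown-fields on a nested field, the same inspection looks like this (the resource name below is the generated, per-run name from this log, so it will not exist elsewhere):

    kubectl explain e2e-test-crd-publish-openapi-2718-crds        # top-level kind, as in the log
    kubectl explain e2e-test-crd-publish-openapi-2718-crds.spec   # drills into the published schema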
• [SLOW TEST:56.611 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":164,"skipped":2644,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:13:05.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:13:36.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4741" for this suite. STEP: Destroying namespace "nsdeletetest-3575" for this suite. Nov 18 07:13:36.994: INFO: Namespace nsdeletetest-3575 was already deleted STEP: Destroying namespace "nsdeletetest-9305" for this suite. 
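Namespace deletion is cascading: the namespace enters Terminating, its pods are deleted, and only then does the namespace object itself go away. Reproducing by hand (names invented):

    kubectl create namespace doomed
    kubectl run sleeper --image=busybox:1.29 -n doomed -- sleep 3600
    kubectl delete namespace doomed --wait=true   # blocks until pods and namespace are gone
    kubectl get pods -n doomed                    # "No resources found in doomed namespace."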
• [SLOW TEST:31.403 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":165,"skipped":2655,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:13:37.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:13:39.467: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:13:41.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:13:43.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280419, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:13:46.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:13:46.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1417-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:13:47.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3245" for this suite. STEP: Destroying namespace "webhook-3245-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.954 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":166,"skipped":2663,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:13:47.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:13:48.051: INFO: The 
status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:13:51.831: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:13:52.236: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:13:54.060: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:13:56.062: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:13:58.059: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:00.060: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:02.059: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:04.060: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:06.062: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:08.059: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:10.060: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = false) Nov 18 07:14:12.059: INFO: The status of Pod test-webserver-07c0ad56-f66b-4092-a000-80741eebeb25 is Running (Ready = true) Nov 18 07:14:12.066: INFO: Container started at 2020-11-18 07:13:53 +0000 UTC, pod became ready at 2020-11-18 07:14:10 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:14:12.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9819" for this suite. 
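The behavior under test: a readiness probe with an initial delay keeps the pod Running but NotReady until the first successful probe, and readiness failures never restart the container (only liveness probes trigger restarts). A sketch; the image choice is illustrative, not the suite's:

    kubectl apply -n demo -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-demo
    spec:
      containers:
      - name: web
        image: nginx:1.19
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 20   # READY stays 0/1 at least this long
          periodSeconds: 5
    EOF
    kubectl get pod readiness-demo -n demo -w   # READY flips to 1/1; RESTARTS remains 0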
• [SLOW TEST:24.119 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2678,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:14:12.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:14:15.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:14:18.073: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280455, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280455, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280456, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280455, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:14:21.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:14:21.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4974" for this suite. STEP: Destroying namespace "webhook-4974-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":168,"skipped":2680,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:14:21.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Nov 18 07:14:21.588: INFO: Waiting up to 5m0s for pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1" in namespace "emptydir-7846" to be "Succeeded or Failed" Nov 18 07:14:21.596: INFO: Pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.61431ms Nov 18 07:14:23.646: INFO: Pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057735846s Nov 18 07:14:25.791: INFO: Pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.202871231s Nov 18 07:14:27.799: INFO: Pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210328936s STEP: Saw pod success Nov 18 07:14:27.799: INFO: Pod "pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1" satisfied condition "Succeeded or Failed" Nov 18 07:14:27.804: INFO: Trying to get logs from node leguer-worker pod pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1 container test-container: STEP: delete the pod Nov 18 07:14:27.878: INFO: Waiting for pod pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1 to disappear Nov 18 07:14:27.893: INFO: Pod pod-5d3214c9-88e6-4a25-91a7-d9ad38f36fb1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:14:27.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7846" for this suite. • [SLOW TEST:6.385 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2685,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:14:27.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-34f4ca5d-5f42-49b3-9321-22152e3f3877 STEP: Creating a pod to test consume configMaps Nov 18 07:14:28.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d" in namespace "configmap-5238" to be "Succeeded or Failed" Nov 18 07:14:28.021: INFO: Pod "pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081052ms Nov 18 07:14:30.029: INFO: Pod "pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02033338s Nov 18 07:14:32.036: INFO: Pod "pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026954055s STEP: Saw pod success Nov 18 07:14:32.036: INFO: Pod "pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d" satisfied condition "Succeeded or Failed" Nov 18 07:14:32.040: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d container configmap-volume-test: STEP: delete the pod Nov 18 07:14:32.115: INFO: Waiting for pod pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d to disappear Nov 18 07:14:32.139: INFO: Pod pod-configmaps-cd9fb0e0-a80e-47a0-a97f-eedf8866df4d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:14:32.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5238" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2706,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:14:32.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Nov 18 07:14:32.284: INFO: PodSpec: initContainers in spec.initContainers Nov 18 07:15:25.125: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-23560469-54b5-4b1b-b99f-3be8a4f0d3a8", GenerateName:"", Namespace:"init-container-7781", SelfLink:"/api/v1/namespaces/init-container-7781/pods/pod-init-23560469-54b5-4b1b-b99f-3be8a4f0d3a8", UID:"2c557449-2c7b-422d-b5b3-ce4308f5ce90", ResourceVersion:"11999231", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63741280472, loc:(*time.Location)(0x6e4d0a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"283030706"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003e524c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4003e524e0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003e52500), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0x4003e52520)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xh5gf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x40060ec100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xh5gf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xh5gf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xh5gf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4003f3e2b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400290c150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003f3e340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003f3e360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4003f3e368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4003f3e36c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x4004704070), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280472, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280472, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280472, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280472, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.18", PodIP:"10.244.2.124", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.124"}}, StartTime:(*v1.Time)(0x4003e52540), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4003e52580), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x400290c230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6c45176ac874afdbaa599929e3de5bea78ecb2cb7ae1875bbb67c9709f14c000", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003e525a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003e52560), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x4003f3e3ef)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:15:25.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7781" for this suite. 
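The pod dump above shows the mechanism being tested: init1 (Command /bin/false) has climbed to RestartCount 3 while init2 and run1 never leave the Waiting state, because with RestartPolicy Always the kubelet retries the failing init container forever and never starts the app container. A sketch of the same pod shape; the container names, images, and commands are taken from the dump, everything else is illustrative:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failingInitPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: v1.PodSpec{
			// RestartAlways: init1 is retried indefinitely, so init2 and the
			// app container run1 stay Waiting, as in the dump above.
			RestartPolicy: v1.RestartPolicyAlways,
			InitContainers: []v1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []v1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
}

func main() { _ = failingInitPod() }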
• [SLOW TEST:53.127 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":171,"skipped":2715,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:15:25.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:15:27.532: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:15:29.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280527, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280527, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280527, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741280527, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:15:32.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than 
webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:15:44.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-717" for this suite. STEP: Destroying namespace "webhook-717-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.770 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":172,"skipped":2717,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:15:45.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Nov 18 07:15:45.206: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:45.219: INFO: Number of nodes with available pods: 0 Nov 18 07:15:45.219: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:46.462: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:46.713: INFO: Number of nodes with available pods: 0 Nov 18 07:15:46.714: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:47.233: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:47.239: INFO: Number of nodes with available pods: 0 Nov 18 07:15:47.239: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:48.230: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:48.238: INFO: Number of nodes with available pods: 0 Nov 18 07:15:48.238: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:49.248: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:49.255: INFO: Number of nodes with available pods: 1 Nov 18 07:15:49.255: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 07:15:50.231: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:50.318: INFO: Number of nodes with available pods: 2 Nov 18 07:15:50.319: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Nov 18 07:15:50.390: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:50.396: INFO: Number of nodes with available pods: 1 Nov 18 07:15:50.396: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:51.405: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:51.414: INFO: Number of nodes with available pods: 1 Nov 18 07:15:51.414: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:53.057: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:53.098: INFO: Number of nodes with available pods: 1 Nov 18 07:15:53.098: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:53.435: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:53.442: INFO: Number of nodes with available pods: 1 Nov 18 07:15:53.442: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:54.409: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:54.417: INFO: Number of nodes with available pods: 1 Nov 18 07:15:54.418: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:55.405: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:55.411: INFO: Number of nodes with available pods: 1 Nov 18 07:15:55.411: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:56.410: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:56.417: INFO: Number of nodes with available pods: 1 Nov 18 07:15:56.418: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:57.450: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:57.456: INFO: Number of nodes with available pods: 1 Nov 18 07:15:57.456: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:58.410: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:58.418: INFO: Number of nodes with available pods: 1 Nov 18 07:15:58.418: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:15:59.438: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:15:59.446: INFO: Number of nodes with available pods: 1 Nov 18 07:15:59.446: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:16:00.433: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:16:00.462: INFO: Number of nodes with available pods: 1 Nov 18 07:16:00.462: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:16:01.431: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:16:01.469: INFO: Number of nodes with available pods: 1 Nov 18 07:16:01.469: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:16:02.411: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:16:02.419: INFO: Number of nodes with available pods: 1 Nov 18 07:16:02.419: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:16:03.408: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:16:03.415: INFO: Number of nodes with available pods: 2 Nov 18 07:16:03.415: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8751, will wait for the garbage collector to delete the pods Nov 18 07:16:03.489: INFO: Deleting DaemonSet.extensions daemon-set took: 11.648028ms Nov 18 07:16:03.990: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.731328ms Nov 18 07:16:20.298: INFO: Number of nodes with available pods: 0 Nov 18 07:16:20.298: INFO: Number of running nodes: 0, number of available pods: 0 Nov 18 07:16:20.305: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8751/daemonsets","resourceVersion":"11999536"},"items":null} Nov 18 07:16:20.326: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8751/pods","resourceVersion":"11999536"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:16:20.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8751" for this suite. 
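Each poll above first skips leguer-control-plane because the DaemonSet's pods do not tolerate its node-role.kubernetes.io/master:NoSchedule taint, then counts available pods on the two workers. A rough sketch of the "simple DaemonSet" shape under test, with illustrative labels and an image that appears elsewhere in this run; the commented-out toleration is what it would take to schedule onto the tainted node as well:

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
					// Without a toleration like the one below, the DaemonSet
					// skips the tainted control-plane node, as logged above.
					// Tolerations: []v1.Toleration{{
					// 	Key:      "node-role.kubernetes.io/master",
					// 	Operator: v1.TolerationOpExists,
					// 	Effect:   v1.TaintEffectNoSchedule,
					// }},
				},
			},
		},
	}
}

func main() { _ = simpleDaemonSet() }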
• [SLOW TEST:35.304 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":173,"skipped":2737,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:16:20.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-86eb9b6d-4eb9-474b-b49b-a738c244c32d STEP: Creating a pod to test consume configMaps Nov 18 07:16:20.457: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f" in namespace "projected-6121" to be "Succeeded or Failed" Nov 18 07:16:20.495: INFO: Pod "pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.282614ms Nov 18 07:16:22.705: INFO: Pod "pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248120741s Nov 18 07:16:24.713: INFO: Pod "pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256362919s STEP: Saw pod success Nov 18 07:16:24.714: INFO: Pod "pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f" satisfied condition "Succeeded or Failed" Nov 18 07:16:24.719: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f container projected-configmap-volume-test: STEP: delete the pod Nov 18 07:16:24.774: INFO: Waiting for pod pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f to disappear Nov 18 07:16:24.801: INFO: Pod pod-projected-configmaps-3df041b1-1d30-4fb3-b092-a621be80594f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:16:24.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6121" for this suite. 
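A sketch of the volume a "defaultMode" test like this mounts: a projected volume drawing from a ConfigMap, where DefaultMode sets the permission bits on every projected file (0644 is the API default when the field is nil). The mode value and names here are illustrative, not read from the test:

package main

import v1 "k8s.io/api/core/v1"

func projectedConfigMapVolume() v1.Volume {
	mode := int32(0400) // illustrative non-default mode; nil would mean 0644
	return v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
}

func main() { _ = projectedConfigMapVolume() }

The test pod then reads back the file mode from inside the container and the suite checks it in the pod logs, which is why the log shows a "Trying to get logs ... container projected-configmap-volume-test" step after pod success.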
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2746,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:16:24.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 18 07:16:24.958: INFO: Waiting up to 5m0s for pod "pod-969d0e72-8a75-42ba-8357-312e49d83d09" in namespace "emptydir-9905" to be "Succeeded or Failed" Nov 18 07:16:24.970: INFO: Pod "pod-969d0e72-8a75-42ba-8357-312e49d83d09": Phase="Pending", Reason="", readiness=false. Elapsed: 12.252293ms Nov 18 07:16:26.977: INFO: Pod "pod-969d0e72-8a75-42ba-8357-312e49d83d09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019498748s Nov 18 07:16:28.986: INFO: Pod "pod-969d0e72-8a75-42ba-8357-312e49d83d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027820379s STEP: Saw pod success Nov 18 07:16:28.986: INFO: Pod "pod-969d0e72-8a75-42ba-8357-312e49d83d09" satisfied condition "Succeeded or Failed" Nov 18 07:16:28.991: INFO: Trying to get logs from node leguer-worker2 pod pod-969d0e72-8a75-42ba-8357-312e49d83d09 container test-container: STEP: delete the pod Nov 18 07:16:29.013: INFO: Waiting for pod pod-969d0e72-8a75-42ba-8357-312e49d83d09 to disappear Nov 18 07:16:29.017: INFO: Pod pod-969d0e72-8a75-42ba-8357-312e49d83d09 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:16:29.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9905" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":175,"skipped":2760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:16:29.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:16:29.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3917" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":176,"skipped":2790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:16:29.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:16:29.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672" in namespace "downward-api-1925" to be "Succeeded or Failed" Nov 18 07:16:29.409: INFO: Pod "downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672": Phase="Pending", Reason="", readiness=false. Elapsed: 44.946591ms Nov 18 07:16:31.416: INFO: Pod "downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052044072s Nov 18 07:16:33.425: INFO: Pod "downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061148097s STEP: Saw pod success Nov 18 07:16:33.425: INFO: Pod "downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672" satisfied condition "Succeeded or Failed" Nov 18 07:16:33.433: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672 container client-container: STEP: delete the pod Nov 18 07:16:33.476: INFO: Waiting for pod downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672 to disappear Nov 18 07:16:33.484: INFO: Pod downwardapi-volume-11d6ec29-a096-4001-9e91-e5c0bcb24672 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:16:33.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1925" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2814,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:16:33.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Nov 18 07:16:33.591: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Nov 18 07:17:47.257: INFO: >>> kubeConfig: /root/.kube/config Nov 18 07:18:08.182: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:19:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-166" for this suite. 
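The three kubeConfig loads above correspond to schema checks against first one multi-version CRD and then two separate single-version CRDs in the same group, confirming both shapes publish into the OpenAPI document. A sketch of a multi-version CRD of the first kind; the group, kind, and names are entirely illustrative, and note that exactly one version may set Storage: true:

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVersionCRD() *apiextv1.CustomResourceDefinition {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextv1.CustomResourceDefinition{
		// CRD metadata name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "testcrds", Singular: "testcrd",
				Kind: "TestCrd", ListKind: "TestCrdList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}

func main() { _ = multiVersionCRD() }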
• [SLOW TEST:169.420 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":178,"skipped":2815,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:19:22.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-9f587a1b-635a-492e-b4ed-b5572273b1a2 STEP: Creating a pod to test consume secrets Nov 18 07:19:23.065: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa" in namespace "projected-2482" to be "Succeeded or Failed" Nov 18 07:19:23.101: INFO: Pod "pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa": Phase="Pending", Reason="", readiness=false. Elapsed: 34.971589ms Nov 18 07:19:25.245: INFO: Pod "pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179319287s Nov 18 07:19:27.252: INFO: Pod "pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186719318s STEP: Saw pod success Nov 18 07:19:27.253: INFO: Pod "pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa" satisfied condition "Succeeded or Failed" Nov 18 07:19:27.257: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa container projected-secret-volume-test: STEP: delete the pod Nov 18 07:19:27.341: INFO: Waiting for pod pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa to disappear Nov 18 07:19:27.358: INFO: Pod pod-projected-secrets-1884d249-93c8-4bd1-856c-8f3aacdde2fa no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:19:27.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2482" for this suite. 
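The Secret variant mirrors the projected-ConfigMap case earlier in this run: only the projection source changes, and DefaultMode again governs the permission bits on the projected files. A compact sketch with illustrative names and mode:

package main

import v1 "k8s.io/api/core/v1"

func projectedSecretVolume() v1.Volume {
	mode := int32(0400) // illustrative; nil falls back to the API default
	return v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
}

func main() { _ = projectedSecretVolume() }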
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2817,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:19:27.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Nov 18 07:19:27.523: INFO: Waiting up to 5m0s for pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9" in namespace "emptydir-2027" to be "Succeeded or Failed" Nov 18 07:19:27.537: INFO: Pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.897496ms Nov 18 07:19:29.544: INFO: Pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020359195s Nov 18 07:19:31.551: INFO: Pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.027904026s Nov 18 07:19:33.558: INFO: Pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034927371s STEP: Saw pod success Nov 18 07:19:33.559: INFO: Pod "pod-5783d554-2114-441c-8f61-8d8c146cd4c9" satisfied condition "Succeeded or Failed" Nov 18 07:19:33.564: INFO: Trying to get logs from node leguer-worker2 pod pod-5783d554-2114-441c-8f61-8d8c146cd4c9 container test-container: STEP: delete the pod Nov 18 07:19:33.602: INFO: Waiting for pod pod-5783d554-2114-441c-8f61-8d8c146cd4c9 to disappear Nov 18 07:19:33.631: INFO: Pod pod-5783d554-2114-441c-8f61-8d8c146cd4c9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:19:33.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2027" for this suite. 
• [SLOW TEST:6.266 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":180,"skipped":2839,"failed":0} [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:19:33.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-328 STEP: creating service affinity-clusterip in namespace services-328 STEP: creating replication controller affinity-clusterip in namespace services-328 I1118 07:19:33.773654 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-328, replica count: 3 I1118 07:19:36.825407 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:19:39.826227 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 07:19:39.841: INFO: Creating new exec pod Nov 18 07:19:44.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-328 execpod-affinity6jqxz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Nov 18 07:19:46.950: INFO: stderr: "I1118 07:19:46.805761 2388 log.go:181] (0x4000614e70) (0x400094caa0) Create stream\nI1118 07:19:46.808654 2388 log.go:181] (0x4000614e70) (0x400094caa0) Stream added, broadcasting: 1\nI1118 07:19:46.827479 2388 log.go:181] (0x4000614e70) Reply frame received for 1\nI1118 07:19:46.828248 2388 log.go:181] (0x4000614e70) (0x4000454a00) Create stream\nI1118 07:19:46.828322 2388 log.go:181] (0x4000614e70) (0x4000454a00) Stream added, broadcasting: 3\nI1118 07:19:46.829717 2388 log.go:181] (0x4000614e70) Reply frame received for 3\nI1118 07:19:46.830049 2388 log.go:181] (0x4000614e70) (0x4000392be0) Create stream\nI1118 07:19:46.830119 2388 log.go:181] (0x4000614e70) (0x4000392be0) Stream added, broadcasting: 5\nI1118 07:19:46.831359 2388 log.go:181] (0x4000614e70) 
Reply frame received for 5\nI1118 07:19:46.929075 2388 log.go:181] (0x4000614e70) Data frame received for 5\nI1118 07:19:46.929425 2388 log.go:181] (0x4000392be0) (5) Data frame handling\nI1118 07:19:46.930462 2388 log.go:181] (0x4000392be0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI1118 07:19:46.932729 2388 log.go:181] (0x4000614e70) Data frame received for 3\nI1118 07:19:46.932971 2388 log.go:181] (0x4000454a00) (3) Data frame handling\nI1118 07:19:46.933315 2388 log.go:181] (0x4000614e70) Data frame received for 1\nI1118 07:19:46.933471 2388 log.go:181] (0x400094caa0) (1) Data frame handling\nI1118 07:19:46.933598 2388 log.go:181] (0x400094caa0) (1) Data frame sent\nI1118 07:19:46.933851 2388 log.go:181] (0x4000614e70) Data frame received for 5\nI1118 07:19:46.933956 2388 log.go:181] (0x4000392be0) (5) Data frame handling\nI1118 07:19:46.934075 2388 log.go:181] (0x4000392be0) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1118 07:19:46.934231 2388 log.go:181] (0x4000614e70) Data frame received for 5\nI1118 07:19:46.934327 2388 log.go:181] (0x4000392be0) (5) Data frame handling\nI1118 07:19:46.935774 2388 log.go:181] (0x4000614e70) (0x400094caa0) Stream removed, broadcasting: 1\nI1118 07:19:46.938364 2388 log.go:181] (0x4000614e70) Go away received\nI1118 07:19:46.941485 2388 log.go:181] (0x4000614e70) (0x400094caa0) Stream removed, broadcasting: 1\nI1118 07:19:46.941748 2388 log.go:181] (0x4000614e70) (0x4000454a00) Stream removed, broadcasting: 3\nI1118 07:19:46.941935 2388 log.go:181] (0x4000614e70) (0x4000392be0) Stream removed, broadcasting: 5\n" Nov 18 07:19:46.951: INFO: stdout: "" Nov 18 07:19:46.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-328 execpod-affinity6jqxz -- /bin/sh -x -c nc -zv -t -w 2 10.103.26.93 80' Nov 18 07:19:48.743: INFO: stderr: "I1118 07:19:48.595551 2408 log.go:181] (0x400063a000) (0x40005c2000) Create stream\nI1118 07:19:48.601507 2408 log.go:181] (0x400063a000) (0x40005c2000) Stream added, broadcasting: 1\nI1118 07:19:48.621111 2408 log.go:181] (0x400063a000) Reply frame received for 1\nI1118 07:19:48.621734 2408 log.go:181] (0x400063a000) (0x40005c20a0) Create stream\nI1118 07:19:48.621798 2408 log.go:181] (0x400063a000) (0x40005c20a0) Stream added, broadcasting: 3\nI1118 07:19:48.623290 2408 log.go:181] (0x400063a000) Reply frame received for 3\nI1118 07:19:48.623575 2408 log.go:181] (0x400063a000) (0x4000970000) Create stream\nI1118 07:19:48.623638 2408 log.go:181] (0x400063a000) (0x4000970000) Stream added, broadcasting: 5\nI1118 07:19:48.624614 2408 log.go:181] (0x400063a000) Reply frame received for 5\nI1118 07:19:48.717738 2408 log.go:181] (0x400063a000) Data frame received for 3\nI1118 07:19:48.718256 2408 log.go:181] (0x40005c20a0) (3) Data frame handling\nI1118 07:19:48.718395 2408 log.go:181] (0x400063a000) Data frame received for 1\nI1118 07:19:48.718491 2408 log.go:181] (0x40005c2000) (1) Data frame handling\nI1118 07:19:48.719144 2408 log.go:181] (0x400063a000) Data frame received for 5\nI1118 07:19:48.719356 2408 log.go:181] (0x4000970000) (5) Data frame handling\nI1118 07:19:48.722267 2408 log.go:181] (0x40005c2000) (1) Data frame sent\n+ nc -zv -t -w 2 10.103.26.93 80\nConnection to 10.103.26.93 80 port [tcp/http] succeeded!\nI1118 07:19:48.724981 2408 log.go:181] (0x4000970000) (5) Data frame sent\nI1118 07:19:48.725127 2408 log.go:181] (0x400063a000) Data frame received for 5\nI1118 
07:19:48.725571 2408 log.go:181] (0x400063a000) (0x40005c2000) Stream removed, broadcasting: 1\nI1118 07:19:48.726080 2408 log.go:181] (0x4000970000) (5) Data frame handling\nI1118 07:19:48.726851 2408 log.go:181] (0x400063a000) Go away received\nI1118 07:19:48.729572 2408 log.go:181] (0x400063a000) (0x40005c2000) Stream removed, broadcasting: 1\nI1118 07:19:48.730036 2408 log.go:181] (0x400063a000) (0x40005c20a0) Stream removed, broadcasting: 3\nI1118 07:19:48.730698 2408 log.go:181] (0x400063a000) (0x4000970000) Stream removed, broadcasting: 5\n" Nov 18 07:19:48.744: INFO: stdout: "" Nov 18 07:19:48.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-328 execpod-affinity6jqxz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.103.26.93:80/ ; done' Nov 18 07:19:50.449: INFO: stderr: "I1118 07:19:50.261538 2428 log.go:181] (0x40005f0e70) (0x4000728140) Create stream\nI1118 07:19:50.264069 2428 log.go:181] (0x40005f0e70) (0x4000728140) Stream added, broadcasting: 1\nI1118 07:19:50.275749 2428 log.go:181] (0x40005f0e70) Reply frame received for 1\nI1118 07:19:50.276298 2428 log.go:181] (0x40005f0e70) (0x4000e06000) Create stream\nI1118 07:19:50.276360 2428 log.go:181] (0x40005f0e70) (0x4000e06000) Stream added, broadcasting: 3\nI1118 07:19:50.277864 2428 log.go:181] (0x40005f0e70) Reply frame received for 3\nI1118 07:19:50.278262 2428 log.go:181] (0x40005f0e70) (0x4000e060a0) Create stream\nI1118 07:19:50.278386 2428 log.go:181] (0x40005f0e70) (0x4000e060a0) Stream added, broadcasting: 5\nI1118 07:19:50.279698 2428 log.go:181] (0x40005f0e70) Reply frame received for 5\nI1118 07:19:50.349864 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.350069 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.350244 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.350361 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.351356 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.351456 2428 log.go:181] (0x4000e06000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.352346 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.352410 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.352490 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.352795 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.353010 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.353108 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.353206 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.353292 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.353372 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.358212 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.358278 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.358344 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.358680 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.358795 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.358886 2428 log.go:181] (0x40005f0e70) 
Data frame received for 3\nI1118 07:19:50.358982 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.359072 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.359146 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.362487 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.362592 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.362714 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.365520 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.365617 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.365731 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.365796 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.365904 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.366002 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.368029 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.368112 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.368212 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.368701 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.368927 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.369061 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.369178 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.369266 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.369364 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.373687 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.373779 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.373890 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.374607 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.374717 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.374800 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.374887 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.374961 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.375024 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.381240 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.381333 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.381461 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.381749 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.381813 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.381908 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.381978 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.382044 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.382122 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.385498 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.385596 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.385679 2428 log.go:181] (0x4000e06000) 
(3) Data frame sent\nI1118 07:19:50.385761 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.385831 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.385920 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.385996 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.386084 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.386170 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.388578 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.388679 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.388912 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.389116 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.389200 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.389286 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.389375 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.389446 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.389531 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.395295 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.395387 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.395519 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.396152 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.396261 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.396374 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.396486 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.396587 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.396703 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.401114 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.401266 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.401393 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.401520 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.401593 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.401658 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.401717 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.401772 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.401842 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.405427 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.405567 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.405703 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.406250 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.406328 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.406388 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.406440 2428 log.go:181] (0x40005f0e70) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeoutI1118 07:19:50.406492 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 
07:19:50.406546 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\nI1118 07:19:50.406601 2428 log.go:181] (0x40005f0e70) Data frame received for 3\n 2 http://10.103.26.93:80/\nI1118 07:19:50.406651 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.406706 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.412165 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.412238 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.412318 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.413215 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.413321 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.413435 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.413568 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.413661 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.413733 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.417838 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.417938 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.418051 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.418415 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.418485 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.418572 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.418835 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.418980 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.419163 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.421868 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.421978 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.422077 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.422635 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.422721 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.422786 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.422877 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.422988 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.423111 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.426882 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.426976 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.427064 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.427644 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.427788 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.427909 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.428049 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.428174 2428 log.go:181] (0x4000e060a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.103.26.93:80/\nI1118 07:19:50.428298 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.430673 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.430766 2428 log.go:181] (0x4000e06000) (3) 
Data frame handling\nI1118 07:19:50.430882 2428 log.go:181] (0x4000e06000) (3) Data frame sent\nI1118 07:19:50.431336 2428 log.go:181] (0x40005f0e70) Data frame received for 5\nI1118 07:19:50.431432 2428 log.go:181] (0x4000e060a0) (5) Data frame handling\nI1118 07:19:50.431645 2428 log.go:181] (0x40005f0e70) Data frame received for 3\nI1118 07:19:50.431732 2428 log.go:181] (0x4000e06000) (3) Data frame handling\nI1118 07:19:50.433173 2428 log.go:181] (0x40005f0e70) Data frame received for 1\nI1118 07:19:50.433253 2428 log.go:181] (0x4000728140) (1) Data frame handling\nI1118 07:19:50.433336 2428 log.go:181] (0x4000728140) (1) Data frame sent\nI1118 07:19:50.434370 2428 log.go:181] (0x40005f0e70) (0x4000728140) Stream removed, broadcasting: 1\nI1118 07:19:50.436560 2428 log.go:181] (0x40005f0e70) Go away received\nI1118 07:19:50.439939 2428 log.go:181] (0x40005f0e70) (0x4000728140) Stream removed, broadcasting: 1\nI1118 07:19:50.440249 2428 log.go:181] (0x40005f0e70) (0x4000e06000) Stream removed, broadcasting: 3\nI1118 07:19:50.440484 2428 log.go:181] (0x40005f0e70) (0x4000e060a0) Stream removed, broadcasting: 5\n" Nov 18 07:19:50.452: INFO: stdout: "\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g\naffinity-clusterip-6pg9g" Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.452: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Received response from host: affinity-clusterip-6pg9g Nov 18 07:19:50.453: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-328, will wait for the garbage collector to delete the pods Nov 18 07:19:50.585: INFO: Deleting ReplicationController affinity-clusterip took: 8.305901ms Nov 18 07:19:51.186: INFO: Terminating ReplicationController affinity-clusterip pods took: 600.734538ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:20:00.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-328" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.795 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":181,"skipped":2839,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:20:00.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 in namespace container-probe-235 Nov 18 07:20:04.661: INFO: Started pod liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 in namespace container-probe-235 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 07:20:04.667: INFO: Initial restart count of pod liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is 0 Nov 18 07:20:20.891: INFO: Restart count of pod container-probe-235/liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is now 1 (16.224083192s elapsed) Nov 18 07:20:40.964: INFO: Restart count of pod container-probe-235/liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is now 2 (36.29700181s elapsed) Nov 18 07:21:01.042: INFO: Restart count of pod container-probe-235/liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is now 3 (56.375343968s elapsed) Nov 18 07:21:21.137: INFO: Restart count of pod container-probe-235/liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is now 4 (1m16.470382341s elapsed) Nov 18 07:22:19.870: INFO: Restart count of pod container-probe-235/liveness-090a44f6-1d0b-4b43-b2e0-8ce283d49343 is now 5 (2m15.2029177s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-235" for this suite. 
• [SLOW TEST:139.469 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":2857,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:19.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:22:20.015: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:27.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4104" for this suite. 
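The CustomResourceDefinition test above registers definitions through the apiextensions.k8s.io API and verifies that listing them works. The same round trip by hand; the group, kind, and names under demo.example.com are placeholders, not anything the test itself creates:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get crd                                   # the "listing ... works" step
kubectl get crd widgets.demo.example.com -o yaml  # fetch a single definition back
kubectl delete crd widgets.demo.example.com
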
• [SLOW TEST:7.246 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":183,"skipped":2863,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:27.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:27.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2227" for this suite. STEP: Destroying namespace "nspatchtest-78a7724c-0ec7-45aa-8bd3-978fc9220b9d-2402" for this suite. 
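The Namespaces test above is a short create/patch/get cycle: make a namespace, patch a label onto it, and confirm the label landed. An equivalent kubectl sequence, with an assumed label key and value (the test's actual label differs):

kubectl create namespace nspatch-demo
# Patch a label onto the namespace, as the test does through the API:
kubectl patch namespace nspatch-demo --type=merge \
  -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
# Confirm the namespace now carries the label:
kubectl get namespace nspatch-demo -o jsonpath='{.metadata.labels.testLabel}{"\n"}'
kubectl delete namespace nspatch-demo
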
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":184,"skipped":2864,"failed":0} SS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:27.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:27.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2680" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":185,"skipped":2866,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:27.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Nov 18 07:22:27.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config cluster-info' Nov 18 07:22:29.179: INFO: stderr: "" Nov 18 07:22:29.179: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43573\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43573/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:29.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7444" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":186,"skipped":2887,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:29.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:40.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4185" for this suite. • [SLOW TEST:11.207 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":187,"skipped":2889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:40.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 18 07:22:40.538: INFO: Waiting up to 5m0s for pod "pod-265b8982-ceaa-4401-be75-ab16605ede24" in namespace "emptydir-2755" to be "Succeeded or Failed" Nov 18 07:22:40.542: INFO: Pod "pod-265b8982-ceaa-4401-be75-ab16605ede24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.438638ms Nov 18 07:22:42.604: INFO: Pod "pod-265b8982-ceaa-4401-be75-ab16605ede24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066002244s Nov 18 07:22:44.610: INFO: Pod "pod-265b8982-ceaa-4401-be75-ab16605ede24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072426971s STEP: Saw pod success Nov 18 07:22:44.610: INFO: Pod "pod-265b8982-ceaa-4401-be75-ab16605ede24" satisfied condition "Succeeded or Failed" Nov 18 07:22:44.615: INFO: Trying to get logs from node leguer-worker2 pod pod-265b8982-ceaa-4401-be75-ab16605ede24 container test-container: STEP: delete the pod Nov 18 07:22:44.649: INFO: Waiting for pod pod-265b8982-ceaa-4401-be75-ab16605ede24 to disappear Nov 18 07:22:44.653: INFO: Pod pod-265b8982-ceaa-4401-be75-ab16605ede24 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:44.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2755" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":2919,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:44.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:22:50.897: INFO: Waiting up to 5m0s for pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487" in namespace "pods-1223" to be "Succeeded or Failed" Nov 18 07:22:50.906: INFO: Pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487": Phase="Pending", Reason="", readiness=false. Elapsed: 9.146614ms Nov 18 07:22:52.915: INFO: Pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01751881s Nov 18 07:22:54.923: INFO: Pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487": Phase="Running", Reason="", readiness=true. Elapsed: 4.025237336s Nov 18 07:22:56.931: INFO: Pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.033583133s STEP: Saw pod success Nov 18 07:22:56.931: INFO: Pod "client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487" satisfied condition "Succeeded or Failed" Nov 18 07:22:56.937: INFO: Trying to get logs from node leguer-worker2 pod client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487 container env3cont: STEP: delete the pod Nov 18 07:22:57.042: INFO: Waiting for pod client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487 to disappear Nov 18 07:22:57.052: INFO: Pod client-envvars-97993801-1016-4a8e-aa3c-0fedf17ba487 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:22:57.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1223" for this suite. • [SLOW TEST:12.377 seconds] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":189,"skipped":2941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:22:57.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:22:57.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f" in namespace "downward-api-3897" to be "Succeeded or Failed" Nov 18 07:22:57.179: INFO: Pod "downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.92966ms Nov 18 07:22:59.224: INFO: Pod "downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085453327s Nov 18 07:23:01.232: INFO: Pod "downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093053389s STEP: Saw pod success Nov 18 07:23:01.232: INFO: Pod "downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f" satisfied condition "Succeeded or Failed" Nov 18 07:23:01.238: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f container client-container: STEP: delete the pod Nov 18 07:23:01.401: INFO: Waiting for pod downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f to disappear Nov 18 07:23:01.457: INFO: Pod downwardapi-volume-ca552310-a51e-47b7-8a47-1051802ba75f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:01.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3897" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:01.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:23:01.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4" in namespace "projected-4008" to be "Succeeded or Failed" Nov 18 07:23:01.624: INFO: Pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 69.044485ms Nov 18 07:23:03.633: INFO: Pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078104924s Nov 18 07:23:05.641: INFO: Pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085987594s Nov 18 07:23:07.649: INFO: Pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094646834s STEP: Saw pod success Nov 18 07:23:07.650: INFO: Pod "downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4" satisfied condition "Succeeded or Failed" Nov 18 07:23:07.657: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4 container client-container: STEP: delete the pod Nov 18 07:23:07.710: INFO: Waiting for pod downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4 to disappear Nov 18 07:23:07.723: INFO: Pod downwardapi-volume-5a33e8cc-3ed1-452d-8b18-6cf64b5db9d4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:07.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4008" for this suite. • [SLOW TEST:6.278 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":191,"skipped":3021,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:07.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Nov 18 07:23:07.956: INFO: Waiting up to 5m0s for pod "client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b" in namespace "containers-6662" to be "Succeeded or Failed" Nov 18 07:23:08.047: INFO: Pod "client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 90.523414ms Nov 18 07:23:10.185: INFO: Pod "client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227985071s Nov 18 07:23:12.191: INFO: Pod "client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.234245953s STEP: Saw pod success Nov 18 07:23:12.191: INFO: Pod "client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b" satisfied condition "Succeeded or Failed" Nov 18 07:23:12.196: INFO: Trying to get logs from node leguer-worker2 pod client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b container test-container: STEP: delete the pod Nov 18 07:23:12.486: INFO: Waiting for pod client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b to disappear Nov 18 07:23:12.502: INFO: Pod client-containers-fa7a3a3f-04bc-4ad0-9cb6-e209ffc4a46b no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:12.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6662" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":3031,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:12.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:12.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3081" for this suite. 
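The Table-transformation test above hinges on content negotiation: clients ask the apiserver to render a resource list as a meta.k8s.io Table via the Accept header, and a backend that cannot produce one must answer 406 Not Acceptable. A sketch of the successful side of that negotiation (assumes kubectl proxy listening on 127.0.0.1:8001):

kubectl proxy --port=8001 &
curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods | head -c 300
# A built-in resource answers with {"kind":"Table",...}; the test points the
# same kind of request at a backend without Table support and expects HTTP 406.
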
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":193,"skipped":3043,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:12.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 18 07:23:12.862: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 18 07:23:12.887: INFO: Waiting for terminating namespaces to be deleted... Nov 18 07:23:12.891: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Nov 18 07:23:12.898: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 07:23:12.898: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 07:23:12.898: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 07:23:12.898: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 07:23:12.899: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Nov 18 07:23:12.954: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Nov 18 07:23:12.954: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 07:23:12.954: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Nov 18 07:23:12.954: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-63ab6e03-35d0-4727-91fb-a423ad243047 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-63ab6e03-35d0-4727-91fb-a423ad243047 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-63ab6e03-35d0-4727-91fb-a423ad243047 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:21.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7250" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.455 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":194,"skipped":3056,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:21.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Nov 18 07:23:21.296: INFO: Waiting up to 5m0s for pod "pod-becd15d9-1739-4db6-8bab-d1aceed9e567" in namespace "emptydir-6085" to be "Succeeded or Failed" Nov 18 07:23:21.311: INFO: Pod "pod-becd15d9-1739-4db6-8bab-d1aceed9e567": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720936ms Nov 18 07:23:23.515: INFO: Pod "pod-becd15d9-1739-4db6-8bab-d1aceed9e567": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218906395s Nov 18 07:23:25.522: INFO: Pod "pod-becd15d9-1739-4db6-8bab-d1aceed9e567": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225843874s STEP: Saw pod success Nov 18 07:23:25.522: INFO: Pod "pod-becd15d9-1739-4db6-8bab-d1aceed9e567" satisfied condition "Succeeded or Failed" Nov 18 07:23:25.527: INFO: Trying to get logs from node leguer-worker2 pod pod-becd15d9-1739-4db6-8bab-d1aceed9e567 container test-container: STEP: delete the pod Nov 18 07:23:25.561: INFO: Waiting for pod pod-becd15d9-1739-4db6-8bab-d1aceed9e567 to disappear Nov 18 07:23:25.574: INFO: Pod pod-becd15d9-1739-4db6-8bab-d1aceed9e567 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:25.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6085" for this suite. 
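The emptyDir variant above checks the mode of the volume root itself when the medium is tmpfs (the expected mode for an emptyDir root is 777). The same check by hand; image, pod name, and mount path are illustrative assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test && stat -c 'perms=%a' /test"]
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed, the variant exercised above
EOF
# Expect a tmpfs mount line and "perms=777" in the pod log:
kubectl logs emptydir-tmpfs-demo
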
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:25.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:25.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-882" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":196,"skipped":3143,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:25.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:23:29.978: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:23:32.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281010, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:23:34.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281010, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281009, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying 
the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:23:37.060: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:37.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8530" for this suite. STEP: Destroying namespace "webhook-8530-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.479 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":197,"skipped":3152,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:37.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-360 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-360 STEP: creating replication controller externalsvc in namespace services-360 I1118 07:23:37.533596 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-360, replica count: 2 I1118 07:23:40.585436 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I1118 07:23:43.586276 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Nov 18 07:23:43.669: INFO: Creating new exec pod Nov 18 07:23:47.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-360 execpodj8l6c -- /bin/sh -x -c nslookup nodeport-service.services-360.svc.cluster.local' Nov 18 07:23:52.460: INFO: stderr: "I1118 07:23:52.301906 2468 log.go:181] (0x400003a0b0) (0x40005b5540) Create stream\nI1118 07:23:52.305609 2468 log.go:181] (0x400003a0b0) (0x40005b5540) Stream added, broadcasting: 1\nI1118 07:23:52.317338 2468 log.go:181] (0x400003a0b0) Reply frame received for 1\nI1118 07:23:52.317882 2468 log.go:181] (0x400003a0b0) (0x4000fc8320) Create stream\nI1118 07:23:52.317936 2468 log.go:181] (0x400003a0b0) (0x4000fc8320) Stream added, broadcasting: 3\nI1118 07:23:52.319584 2468 log.go:181] (0x400003a0b0) Reply frame received for 3\nI1118 07:23:52.320037 2468 log.go:181] (0x400003a0b0) (0x40007b4000) Create stream\nI1118 07:23:52.320131 2468 log.go:181] (0x400003a0b0) (0x40007b4000) Stream added, broadcasting: 5\nI1118 07:23:52.321936 2468 log.go:181] (0x400003a0b0) Reply frame received for 5\nI1118 07:23:52.425720 2468 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 07:23:52.426103 2468 log.go:181] (0x40007b4000) (5) Data frame handling\nI1118 07:23:52.426709 2468 log.go:181] (0x40007b4000) (5) Data frame sent\n+ nslookup nodeport-service.services-360.svc.cluster.local\nI1118 07:23:52.435574 2468 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 07:23:52.435744 2468 log.go:181] (0x4000fc8320) (3) Data frame handling\nI1118 07:23:52.435895 2468 log.go:181] (0x4000fc8320) (3) Data frame sent\nI1118 07:23:52.437349 2468 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 07:23:52.437527 2468 log.go:181] (0x4000fc8320) (3) Data frame handling\nI1118 07:23:52.437682 2468 log.go:181] (0x4000fc8320) (3) Data frame sent\nI1118 07:23:52.437885 2468 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 07:23:52.438093 2468 log.go:181] (0x40007b4000) (5) Data frame handling\nI1118 07:23:52.438423 2468 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 07:23:52.438609 2468 log.go:181] (0x4000fc8320) (3) Data frame handling\nI1118 07:23:52.441990 2468 log.go:181] (0x400003a0b0) Data frame received for 1\nI1118 07:23:52.442107 2468 log.go:181] (0x40005b5540) (1) Data frame handling\nI1118 07:23:52.442218 2468 log.go:181] (0x40005b5540) (1) Data frame sent\nI1118 07:23:52.445013 2468 log.go:181] (0x400003a0b0) (0x40005b5540) Stream removed, broadcasting: 1\nI1118 07:23:52.446679 2468 log.go:181] (0x400003a0b0) Go away received\nI1118 07:23:52.450011 2468 log.go:181] (0x400003a0b0) (0x40005b5540) Stream removed, broadcasting: 1\nI1118 07:23:52.450355 2468 log.go:181] (0x400003a0b0) (0x4000fc8320) Stream removed, broadcasting: 3\nI1118 07:23:52.450804 2468 log.go:181] (0x400003a0b0) (0x40007b4000) Stream removed, broadcasting: 5\n" Nov 18 07:23:52.461: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-360.svc.cluster.local\tcanonical name = externalsvc.services-360.svc.cluster.local.\nName:\texternalsvc.services-360.svc.cluster.local\nAddress: 10.98.66.162\n\n" STEP: deleting ReplicationController externalsvc in namespace services-360, will wait for the garbage 
collector to delete the pods Nov 18 07:23:52.527: INFO: Deleting ReplicationController externalsvc took: 8.795282ms Nov 18 07:23:52.628: INFO: Terminating ReplicationController externalsvc pods took: 100.746905ms Nov 18 07:23:59.625: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:23:59.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-360" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:22.423 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":198,"skipped":3162,"failed":0} [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:23:59.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2734 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2734 to expose endpoints map[] Nov 18 07:23:59.902: INFO: successfully validated that service endpoint-test2 in namespace services-2734 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2734 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2734 to expose endpoints map[pod1:[80]] Nov 18 07:24:04.041: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]], will retry Nov 18 07:24:05.149: INFO: successfully validated that service endpoint-test2 in namespace services-2734 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2734 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2734 to expose endpoints map[pod1:[80] pod2:[80]] Nov 18 07:24:09.419: INFO: successfully validated that service endpoint-test2 in namespace services-2734 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2734 STEP: waiting up to 
3m0s for service endpoint-test2 in namespace services-2734 to expose endpoints map[pod2:[80]] Nov 18 07:24:09.474: INFO: successfully validated that service endpoint-test2 in namespace services-2734 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2734 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2734 to expose endpoints map[] Nov 18 07:24:10.710: INFO: successfully validated that service endpoint-test2 in namespace services-2734 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:24:10.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2734" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.063 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":199,"skipped":3162,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:24:10.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:24:10.842: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0" in namespace "security-context-test-7992" to be "Succeeded or Failed" Nov 18 07:24:10.850: INFO: Pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.874545ms Nov 18 07:24:12.857: INFO: Pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01466175s Nov 18 07:24:14.864: INFO: Pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021621402s Nov 18 07:24:14.864: INFO: Pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0" satisfied condition "Succeeded or Failed" Nov 18 07:24:14.874: INFO: Got logs for pod "busybox-privileged-false-01e93e99-dd36-4e8f-b58e-811a8384c1a0": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:24:14.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7992" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3170,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:24:14.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:24:15.072: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ccb66ab2-81b2-4026-b46e-a45495e60892", Controller:(*bool)(0x4005ec2b1a), BlockOwnerDeletion:(*bool)(0x4005ec2b1b)}} Nov 18 07:24:15.086: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9dbfa080-62e5-48c0-91bc-4ff3c7ed536e", Controller:(*bool)(0x40054e1cd2), BlockOwnerDeletion:(*bool)(0x40054e1cd3)}} Nov 18 07:24:15.119: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b5573dc1-0962-4f5c-93e7-8cbcf863b007", Controller:(*bool)(0x4006605b8a), BlockOwnerDeletion:(*bool)(0x4006605b8b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:24:20.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3623" for this suite. 
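The ownerReferences dumped above form a deliberate cycle (pod1 -> pod3 -> pod2 -> pod1); the point of the test is that the garbage collector tolerates such a circle instead of deadlocking. A rough shell sketch of building the same cycle by hand, with hypothetical pod names:

  for p in pod1 pod2 pod3; do kubectl run $p --image=k8s.gcr.io/pause:3.2; done
  # wire child -> owner pairs into a circle via metadata.ownerReferences
  for pair in pod1:pod3 pod2:pod1 pod3:pod2; do
    child=${pair%%:*}; owner=${pair##*:}
    ouid=$(kubectl get pod "$owner" -o jsonpath='{.metadata.uid}')
    kubectl patch pod "$child" --type=merge -p \
      "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$owner\",\"uid\":\"$ouid\"}]}}"
  done
  kubectl delete pod pod1   # deletion must still make progress despite the circle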
• [SLOW TEST:5.310 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":201,"skipped":3170,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:24:20.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:24:20.432: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745" in namespace "projected-7029" to be "Succeeded or Failed" Nov 18 07:24:20.486: INFO: Pod "downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745": Phase="Pending", Reason="", readiness=false. Elapsed: 53.855124ms Nov 18 07:24:22.495: INFO: Pod "downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062514391s Nov 18 07:24:24.502: INFO: Pod "downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069741367s STEP: Saw pod success Nov 18 07:24:24.502: INFO: Pod "downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745" satisfied condition "Succeeded or Failed" Nov 18 07:24:24.507: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745 container client-container: STEP: delete the pod Nov 18 07:24:24.569: INFO: Waiting for pod downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745 to disappear Nov 18 07:24:24.582: INFO: Pod downwardapi-volume-e87378ec-8bbd-44da-a26e-8a03d0283745 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:24:24.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7029" for this suite. 
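The "set mode on item file" case above asks the kubelet to write a downward API file with an explicit per-item mode. A minimal sketch, with illustrative names; 0400 stands in for whatever mode the run asserted:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      # -L follows the ..data symlink so the mode of the real file is shown
      command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              mode: 0400
              fieldRef:
                fieldPath: metadata.name
  EOF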
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3181,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:24:24.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:24:24.709: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 18 07:24:45.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 create -f -' Nov 18 07:24:51.108: INFO: stderr: "" Nov 18 07:24:51.108: INFO: stdout: "e2e-test-crd-publish-openapi-1914-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 18 07:24:51.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 delete e2e-test-crd-publish-openapi-1914-crds test-cr' Nov 18 07:24:52.529: INFO: stderr: "" Nov 18 07:24:52.529: INFO: stdout: "e2e-test-crd-publish-openapi-1914-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Nov 18 07:24:52.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 apply -f -' Nov 18 07:24:55.909: INFO: stderr: "" Nov 18 07:24:55.909: INFO: stdout: "e2e-test-crd-publish-openapi-1914-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Nov 18 07:24:55.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1246 delete e2e-test-crd-publish-openapi-1914-crds test-cr' Nov 18 07:24:57.275: INFO: stderr: "" Nov 18 07:24:57.275: INFO: stdout: "e2e-test-crd-publish-openapi-1914-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Nov 18 07:24:57.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1914-crds' Nov 18 07:25:00.712: INFO: stderr: "" Nov 18 07:25:00.712: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1914-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:11.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1246" for this suite. • [SLOW TEST:47.162 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":203,"skipped":3202,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:11.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:18.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3481" for this suite. STEP: Destroying namespace "nsdeletetest-6751" for this suite. Nov 18 07:25:18.141: INFO: Namespace nsdeletetest-6751 was already deleted STEP: Destroying namespace "nsdeletetest-8234" for this suite. 
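The namespace test above relies on cascading deletion: removing a namespace removes every object inside it, services included. The same check by hand, with hypothetical names:

  kubectl create namespace ns-demo
  kubectl -n ns-demo create service clusterip demo-svc --tcp=80:80
  kubectl delete namespace ns-demo
  kubectl wait --for=delete namespace/ns-demo --timeout=120s
  # recreate the namespace; no service should survive the round trip
  kubectl create namespace ns-demo
  kubectl -n ns-demo get services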
• [SLOW TEST:6.384 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":204,"skipped":3215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:18.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 18 07:25:23.838: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 18 07:25:25.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281123, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281123, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281123, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281123, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:25:28.963: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
Nov 18 07:25:29.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:30.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5387" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:12.794 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":205,"skipped":3265,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:30.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Nov 18 07:25:37.091: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4653 PodName:pod-sharedvolume-1f8d3f2d-82e0-4292-a1cd-48db05e4d8ea ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:25:37.091: INFO: >>> kubeConfig: /root/.kube/config I1118 07:25:37.153237 10 log.go:181] (0x4004f0c6e0) (0x4001443ae0) Create stream I1118 07:25:37.153447 10 log.go:181] (0x4004f0c6e0) (0x4001443ae0) Stream added, broadcasting: 1 I1118 07:25:37.158773 10 log.go:181] (0x4004f0c6e0) Reply frame received for 1 I1118 07:25:37.159214 10 log.go:181] (0x4004f0c6e0) (0x4001ea3cc0) Create stream I1118 07:25:37.159415 10 log.go:181] (0x4004f0c6e0) (0x4001ea3cc0) Stream added, broadcasting: 3 I1118 07:25:37.162560 10 log.go:181] (0x4004f0c6e0) Reply frame received for 3 I1118 07:25:37.162748 10 log.go:181] (0x4004f0c6e0) (0x4001443b80) 
Create stream I1118 07:25:37.162844 10 log.go:181] (0x4004f0c6e0) (0x4001443b80) Stream added, broadcasting: 5 I1118 07:25:37.164633 10 log.go:181] (0x4004f0c6e0) Reply frame received for 5 I1118 07:25:37.252449 10 log.go:181] (0x4004f0c6e0) Data frame received for 5 I1118 07:25:37.252678 10 log.go:181] (0x4001443b80) (5) Data frame handling I1118 07:25:37.252823 10 log.go:181] (0x4004f0c6e0) Data frame received for 3 I1118 07:25:37.253074 10 log.go:181] (0x4001ea3cc0) (3) Data frame handling I1118 07:25:37.253196 10 log.go:181] (0x4001ea3cc0) (3) Data frame sent I1118 07:25:37.253286 10 log.go:181] (0x4004f0c6e0) Data frame received for 3 I1118 07:25:37.253364 10 log.go:181] (0x4001ea3cc0) (3) Data frame handling I1118 07:25:37.254092 10 log.go:181] (0x4004f0c6e0) Data frame received for 1 I1118 07:25:37.254210 10 log.go:181] (0x4001443ae0) (1) Data frame handling I1118 07:25:37.254312 10 log.go:181] (0x4001443ae0) (1) Data frame sent I1118 07:25:37.254412 10 log.go:181] (0x4004f0c6e0) (0x4001443ae0) Stream removed, broadcasting: 1 I1118 07:25:37.254539 10 log.go:181] (0x4004f0c6e0) Go away received I1118 07:25:37.255156 10 log.go:181] (0x4004f0c6e0) (0x4001443ae0) Stream removed, broadcasting: 1 I1118 07:25:37.255388 10 log.go:181] (0x4004f0c6e0) (0x4001ea3cc0) Stream removed, broadcasting: 3 I1118 07:25:37.255538 10 log.go:181] (0x4004f0c6e0) (0x4001443b80) Stream removed, broadcasting: 5 Nov 18 07:25:37.255: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:37.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4653" for this suite. 
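The shared-volume exec above works because both containers in the pod mount the same emptyDir. A minimal sketch using the same file path as the test output, with illustrative container names:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: shared-volume-demo
  spec:
    containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
      volumeMounts:
      - name: share
        mountPath: /usr/share/volumeshare
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: share
        mountPath: /usr/share/volumeshare
    volumes:
    - name: share
      emptyDir: {}
  EOF
  # read from the second container what the first one wrote
  kubectl exec shared-volume-demo -c reader -- cat /usr/share/volumeshare/shareddata.txt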
• [SLOW TEST:6.324 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":206,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:37.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Nov 18 07:25:37.353: INFO: Waiting up to 5m0s for pod "pod-4e66c150-2bd4-4439-ac32-c371055b8872" in namespace "emptydir-1871" to be "Succeeded or Failed" Nov 18 07:25:37.362: INFO: Pod "pod-4e66c150-2bd4-4439-ac32-c371055b8872": Phase="Pending", Reason="", readiness=false. Elapsed: 9.516989ms Nov 18 07:25:39.371: INFO: Pod "pod-4e66c150-2bd4-4439-ac32-c371055b8872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017677037s Nov 18 07:25:41.380: INFO: Pod "pod-4e66c150-2bd4-4439-ac32-c371055b8872": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026928598s STEP: Saw pod success Nov 18 07:25:41.380: INFO: Pod "pod-4e66c150-2bd4-4439-ac32-c371055b8872" satisfied condition "Succeeded or Failed" Nov 18 07:25:41.386: INFO: Trying to get logs from node leguer-worker2 pod pod-4e66c150-2bd4-4439-ac32-c371055b8872 container test-container: STEP: delete the pod Nov 18 07:25:41.443: INFO: Waiting for pod pod-4e66c150-2bd4-4439-ac32-c371055b8872 to disappear Nov 18 07:25:41.455: INFO: Pod pod-4e66c150-2bd4-4439-ac32-c371055b8872 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:41.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1871" for this suite. 
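The (root,0644,tmpfs) naming above encodes the matrix point being tested: a file created as root with mode 0644 on a Memory-backed emptyDir. A short sketch of the same assertion, with illustrative names; the suite's mounttest image does this natively:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: tmpfs-0644-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo data > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && id -u"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory
  EOF
  kubectl logs tmpfs-0644-demo   # expect -rw-r--r-- and uid 0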
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3308,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:41.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:25:44.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:25:46.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281144, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281144, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281144, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281144, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:25:49.301: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:25:49.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-421" for this suite. STEP: Destroying namespace "webhook-421-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.985 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":208,"skipped":3325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:25:49.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Nov 18 07:25:51.097: INFO: Pod name wrapped-volume-race-fb9179f4-f7ab-4c06-b674-c60bea3b2f8b: Found 0 pods out of 5 Nov 18 07:25:56.124: INFO: Pod name wrapped-volume-race-fb9179f4-f7ab-4c06-b674-c60bea3b2f8b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fb9179f4-f7ab-4c06-b674-c60bea3b2f8b in namespace emptydir-wrapper-3526, will wait for the garbage collector to delete the pods Nov 18 07:26:10.277: INFO: Deleting ReplicationController wrapped-volume-race-fb9179f4-f7ab-4c06-b674-c60bea3b2f8b took: 9.607596ms Nov 18 07:26:10.778: INFO: Terminating ReplicationController wrapped-volume-race-fb9179f4-f7ab-4c06-b674-c60bea3b2f8b pods took: 500.62868ms STEP: Creating RC which spawns configmap-volume pods Nov 18 07:26:19.878: INFO: Pod name wrapped-volume-race-3037d9e5-f41b-4aba-b171-3a8118c2b639: Found 1 pods out of 5 Nov 18 07:26:24.903: INFO: Pod name wrapped-volume-race-3037d9e5-f41b-4aba-b171-3a8118c2b639: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-3037d9e5-f41b-4aba-b171-3a8118c2b639 in namespace emptydir-wrapper-3526, will wait for the garbage collector to delete the pods Nov 18 07:26:39.054: INFO: Deleting ReplicationController wrapped-volume-race-3037d9e5-f41b-4aba-b171-3a8118c2b639 took: 9.098135ms Nov 18 07:26:39.555: INFO: Terminating ReplicationController wrapped-volume-race-3037d9e5-f41b-4aba-b171-3a8118c2b639 pods took: 500.796143ms STEP: Creating RC which spawns configmap-volume pods Nov 18 07:26:49.936: INFO: Pod name wrapped-volume-race-5fbbfd33-bb8c-4b25-9d05-8be714144035: Found 0 pods out of 5 Nov 18 07:26:54.961: INFO: Pod name wrapped-volume-race-5fbbfd33-bb8c-4b25-9d05-8be714144035: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-5fbbfd33-bb8c-4b25-9d05-8be714144035 in namespace emptydir-wrapper-3526, will wait for the garbage collector to delete the pods Nov 18 07:27:11.108: INFO: Deleting ReplicationController wrapped-volume-race-5fbbfd33-bb8c-4b25-9d05-8be714144035 took: 38.674734ms Nov 18 07:27:11.608: INFO: Terminating ReplicationController wrapped-volume-race-5fbbfd33-bb8c-4b25-9d05-8be714144035 pods took: 500.727237ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:27:21.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3526" for this suite. • [SLOW TEST:92.115 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":209,"skipped":3389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:27:21.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9879.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9879.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk 
-F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9879.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9879.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 07:27:27.892: INFO: DNS probes using dns-9879/dns-test-7a59d223-232b-4aef-9403-9c78f27bae37 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:27:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9879" for this suite. • [SLOW TEST:6.490 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":210,"skipped":3414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:27:28.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:27:32.101: INFO: deployment "sample-webhook-deployment" doesn't have the required 
revision set Nov 18 07:27:34.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281252, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281252, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281252, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281252, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:27:37.292: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Nov 18 07:27:41.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config attach --namespace=webhook-8519 to-be-attached-pod -i -c=container1' Nov 18 07:27:42.899: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:27:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8519" for this suite. STEP: Destroying namespace "webhook-8519-markers" for this suite. 
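------------------------------
Note on the flow above: the test deploys a TLS webhook server, registers it via a ValidatingWebhookConfiguration for the CONNECT operation on the pods/attach subresource, and then expects `kubectl attach` to exit non-zero (the `rc: 1` line). A minimal deny-everything handler in Go gives the idea — this is a sketch, not the suite's actual webhook image, and the handler path and cert file names are assumptions:

package main

import (
    "encoding/json"
    "log"
    "net/http"

    admissionv1 "k8s.io/api/admission/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyAttach rejects every AdmissionReview it receives; with the webhook
// registered for pods/attach, that is enough to make `kubectl attach` fail.
func denyAttach(w http.ResponseWriter, r *http.Request) {
    var review admissionv1.AdmissionReview
    if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
        http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
        return
    }
    review.Response = &admissionv1.AdmissionResponse{
        UID:     review.Request.UID,
        Allowed: false,
        Result:  &metav1.Status{Message: "attach to this pod is denied"},
    }
    w.Header().Set("Content-Type", "application/json")
    if err := json.NewEncoder(w).Encode(&review); err != nil {
        log.Println(err)
    }
}

func main() {
    http.HandleFunc("/pods/attach", denyAttach)
    // tls.crt/tls.key stand in for the cert pair provisioned in the
    // "Setting up server cert" step above.
    log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
------------------------------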
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.981 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":211,"skipped":3467,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:27:43.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Nov 18 07:27:43.157: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix401171969/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:27:44.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-810" for this suite. 
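------------------------------
For context, `kubectl proxy --unix-socket=PATH` serves the API over a Unix domain socket instead of a TCP port; the "retrieving proxy /api/ output" step then reads `/api/` through that socket. A hedged sketch of a client doing the same with Go's stdlib — the socket path is hypothetical (the test used a generated temp path of the form /tmp/kubectl-proxy-unixNNNNNNNNN/test):

package main

import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    const socket = "/tmp/kubectl-proxy-unix/test" // hypothetical path

    client := &http.Client{
        Transport: &http.Transport{
            // Ignore the host:port in the URL and dial the unix socket.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", socket)
            },
        },
    }

    // Equivalent of the test's "retrieving proxy /api/ output" step.
    resp, err := client.Get("http://unix/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body))
}
------------------------------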
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":212,"skipped":3474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:27:44.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 18 07:27:44.547: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Nov 18 07:27:44.555: INFO: starting watch STEP: patching STEP: updating Nov 18 07:27:44.604: INFO: waiting for watch events with expected annotations Nov 18 07:27:44.605: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:27:44.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-553" for this suite. 
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":213,"skipped":3504,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:27:44.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3140 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3140 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3140 Nov 18 07:27:45.001: INFO: Found 0 stateful pods, waiting for 1 Nov 18 07:27:55.010: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Nov 18 07:27:55.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:27:56.716: INFO: stderr: "I1118 07:27:56.552529 2631 log.go:181] (0x400003a790) (0x400097c140) Create stream\nI1118 07:27:56.556070 2631 log.go:181] (0x400003a790) (0x400097c140) Stream added, broadcasting: 1\nI1118 07:27:56.583139 2631 log.go:181] (0x400003a790) Reply frame received for 1\nI1118 07:27:56.583702 2631 log.go:181] (0x400003a790) (0x4000c8c000) Create stream\nI1118 07:27:56.583763 2631 log.go:181] (0x400003a790) (0x4000c8c000) Stream added, broadcasting: 3\nI1118 07:27:56.585396 2631 log.go:181] (0x400003a790) Reply frame received for 3\nI1118 07:27:56.585672 2631 log.go:181] (0x400003a790) (0x4000c8c0a0) Create stream\nI1118 07:27:56.585725 2631 log.go:181] (0x400003a790) (0x4000c8c0a0) Stream added, broadcasting: 5\nI1118 07:27:56.586728 2631 log.go:181] (0x400003a790) Reply frame received for 5\nI1118 07:27:56.662521 2631 log.go:181] (0x400003a790) Data frame received for 5\nI1118 07:27:56.662868 2631 log.go:181] (0x4000c8c0a0) (5) Data frame handling\nI1118 07:27:56.663721 2631 log.go:181] (0x4000c8c0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:27:56.694386 2631 log.go:181] 
(0x400003a790) Data frame received for 3\nI1118 07:27:56.694690 2631 log.go:181] (0x4000c8c000) (3) Data frame handling\nI1118 07:27:56.694888 2631 log.go:181] (0x400003a790) Data frame received for 5\nI1118 07:27:56.695021 2631 log.go:181] (0x4000c8c0a0) (5) Data frame handling\nI1118 07:27:56.695319 2631 log.go:181] (0x4000c8c000) (3) Data frame sent\nI1118 07:27:56.695539 2631 log.go:181] (0x400003a790) Data frame received for 3\nI1118 07:27:56.695701 2631 log.go:181] (0x4000c8c000) (3) Data frame handling\nI1118 07:27:56.696064 2631 log.go:181] (0x400003a790) Data frame received for 1\nI1118 07:27:56.696158 2631 log.go:181] (0x400097c140) (1) Data frame handling\nI1118 07:27:56.696257 2631 log.go:181] (0x400097c140) (1) Data frame sent\nI1118 07:27:56.697730 2631 log.go:181] (0x400003a790) (0x400097c140) Stream removed, broadcasting: 1\nI1118 07:27:56.701394 2631 log.go:181] (0x400003a790) Go away received\nI1118 07:27:56.706450 2631 log.go:181] (0x400003a790) (0x400097c140) Stream removed, broadcasting: 1\nI1118 07:27:56.706783 2631 log.go:181] (0x400003a790) (0x4000c8c000) Stream removed, broadcasting: 3\nI1118 07:27:56.707015 2631 log.go:181] (0x400003a790) (0x4000c8c0a0) Stream removed, broadcasting: 5\n" Nov 18 07:27:56.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:27:56.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 07:27:56.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 18 07:28:06.733: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 18 07:28:06.733: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 07:28:06.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99992482s Nov 18 07:28:07.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973309263s Nov 18 07:28:08.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966254107s Nov 18 07:28:09.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.957213063s Nov 18 07:28:10.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.941417356s Nov 18 07:28:11.849: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.932227891s Nov 18 07:28:12.855: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.899030175s Nov 18 07:28:13.871: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.893014485s Nov 18 07:28:14.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.877488263s Nov 18 07:28:15.887: INFO: Verifying statefulset ss doesn't scale past 1 for another 868.666335ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3140 Nov 18 07:28:16.897: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:28:18.497: INFO: stderr: "I1118 07:28:18.366248 2652 log.go:181] (0x40006a6000) (0x4000e021e0) Create stream\nI1118 07:28:18.372206 2652 log.go:181] (0x40006a6000) (0x4000e021e0) Stream added, broadcasting: 1\nI1118 07:28:18.396758 2652 log.go:181] (0x40006a6000) Reply frame received for 1\nI1118 07:28:18.397397 2652 log.go:181] (0x40006a6000) (0x40006cc000) Create stream\nI1118 07:28:18.397465 2652 
log.go:181] (0x40006a6000) (0x40006cc000) Stream added, broadcasting: 3\nI1118 07:28:18.398921 2652 log.go:181] (0x40006a6000) Reply frame received for 3\nI1118 07:28:18.399234 2652 log.go:181] (0x40006a6000) (0x40006cc0a0) Create stream\nI1118 07:28:18.399300 2652 log.go:181] (0x40006a6000) (0x40006cc0a0) Stream added, broadcasting: 5\nI1118 07:28:18.400412 2652 log.go:181] (0x40006a6000) Reply frame received for 5\nI1118 07:28:18.478000 2652 log.go:181] (0x40006a6000) Data frame received for 5\nI1118 07:28:18.478595 2652 log.go:181] (0x40006a6000) Data frame received for 3\nI1118 07:28:18.478689 2652 log.go:181] (0x40006cc000) (3) Data frame handling\nI1118 07:28:18.478773 2652 log.go:181] (0x40006a6000) Data frame received for 1\nI1118 07:28:18.478904 2652 log.go:181] (0x4000e021e0) (1) Data frame handling\nI1118 07:28:18.479038 2652 log.go:181] (0x40006cc0a0) (5) Data frame handling\nI1118 07:28:18.479815 2652 log.go:181] (0x40006cc000) (3) Data frame sent\nI1118 07:28:18.480629 2652 log.go:181] (0x40006cc0a0) (5) Data frame sent\nI1118 07:28:18.481080 2652 log.go:181] (0x40006a6000) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 07:28:18.481183 2652 log.go:181] (0x40006cc000) (3) Data frame handling\nI1118 07:28:18.481378 2652 log.go:181] (0x4000e021e0) (1) Data frame sent\nI1118 07:28:18.481540 2652 log.go:181] (0x40006a6000) Data frame received for 5\nI1118 07:28:18.481667 2652 log.go:181] (0x40006cc0a0) (5) Data frame handling\nI1118 07:28:18.483642 2652 log.go:181] (0x40006a6000) (0x4000e021e0) Stream removed, broadcasting: 1\nI1118 07:28:18.485254 2652 log.go:181] (0x40006a6000) Go away received\nI1118 07:28:18.488138 2652 log.go:181] (0x40006a6000) (0x4000e021e0) Stream removed, broadcasting: 1\nI1118 07:28:18.488429 2652 log.go:181] (0x40006a6000) (0x40006cc000) Stream removed, broadcasting: 3\nI1118 07:28:18.488630 2652 log.go:181] (0x40006a6000) (0x40006cc0a0) Stream removed, broadcasting: 5\n" Nov 18 07:28:18.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 07:28:18.498: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 07:28:18.513: INFO: Found 1 stateful pods, waiting for 3 Nov 18 07:28:28.527: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:28:28.527: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:28:28.527: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Nov 18 07:28:38.523: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:28:38.524: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:28:38.524: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Nov 18 07:28:38.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:28:40.090: INFO: stderr: "I1118 07:28:39.983332 2672 log.go:181] (0x400063edc0) (0x4000894c80) Create stream\nI1118 07:28:39.987354 2672 log.go:181] (0x400063edc0) (0x4000894c80) Stream added, broadcasting: 1\nI1118 
07:28:39.999732 2672 log.go:181] (0x400063edc0) Reply frame received for 1\nI1118 07:28:40.000624 2672 log.go:181] (0x400063edc0) (0x40009a61e0) Create stream\nI1118 07:28:40.000716 2672 log.go:181] (0x400063edc0) (0x40009a61e0) Stream added, broadcasting: 3\nI1118 07:28:40.002353 2672 log.go:181] (0x400063edc0) Reply frame received for 3\nI1118 07:28:40.002677 2672 log.go:181] (0x400063edc0) (0x4000c041e0) Create stream\nI1118 07:28:40.002755 2672 log.go:181] (0x400063edc0) (0x4000c041e0) Stream added, broadcasting: 5\nI1118 07:28:40.004162 2672 log.go:181] (0x400063edc0) Reply frame received for 5\nI1118 07:28:40.055030 2672 log.go:181] (0x400063edc0) Data frame received for 3\nI1118 07:28:40.055444 2672 log.go:181] (0x400063edc0) Data frame received for 5\nI1118 07:28:40.055742 2672 log.go:181] (0x4000c041e0) (5) Data frame handling\nI1118 07:28:40.056434 2672 log.go:181] (0x400063edc0) Data frame received for 1\nI1118 07:28:40.056588 2672 log.go:181] (0x4000894c80) (1) Data frame handling\nI1118 07:28:40.056807 2672 log.go:181] (0x40009a61e0) (3) Data frame handling\nI1118 07:28:40.057104 2672 log.go:181] (0x4000894c80) (1) Data frame sent\nI1118 07:28:40.057321 2672 log.go:181] (0x40009a61e0) (3) Data frame sent\nI1118 07:28:40.057644 2672 log.go:181] (0x4000c041e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:28:40.058314 2672 log.go:181] (0x400063edc0) Data frame received for 5\nI1118 07:28:40.058444 2672 log.go:181] (0x4000c041e0) (5) Data frame handling\nI1118 07:28:40.058786 2672 log.go:181] (0x400063edc0) Data frame received for 3\nI1118 07:28:40.058916 2672 log.go:181] (0x40009a61e0) (3) Data frame handling\nI1118 07:28:40.060090 2672 log.go:181] (0x400063edc0) (0x4000894c80) Stream removed, broadcasting: 1\nI1118 07:28:40.062493 2672 log.go:181] (0x400063edc0) Go away received\nI1118 07:28:40.079418 2672 log.go:181] (0x400063edc0) (0x4000894c80) Stream removed, broadcasting: 1\nI1118 07:28:40.079714 2672 log.go:181] (0x400063edc0) (0x40009a61e0) Stream removed, broadcasting: 3\nI1118 07:28:40.079905 2672 log.go:181] (0x400063edc0) (0x4000c041e0) Stream removed, broadcasting: 5\n" Nov 18 07:28:40.091: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:28:40.091: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 07:28:40.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:28:41.729: INFO: stderr: "I1118 07:28:41.567000 2692 log.go:181] (0x400026d6b0) (0x4000412280) Create stream\nI1118 07:28:41.571446 2692 log.go:181] (0x400026d6b0) (0x4000412280) Stream added, broadcasting: 1\nI1118 07:28:41.585457 2692 log.go:181] (0x400026d6b0) Reply frame received for 1\nI1118 07:28:41.586347 2692 log.go:181] (0x400026d6b0) (0x4000b57b80) Create stream\nI1118 07:28:41.586421 2692 log.go:181] (0x400026d6b0) (0x4000b57b80) Stream added, broadcasting: 3\nI1118 07:28:41.587685 2692 log.go:181] (0x400026d6b0) Reply frame received for 3\nI1118 07:28:41.587895 2692 log.go:181] (0x400026d6b0) (0x4000b57c20) Create stream\nI1118 07:28:41.587944 2692 log.go:181] (0x400026d6b0) (0x4000b57c20) Stream added, broadcasting: 5\nI1118 07:28:41.589156 2692 log.go:181] (0x400026d6b0) Reply frame received for 5\nI1118 07:28:41.673910 2692 log.go:181] 
(0x400026d6b0) Data frame received for 5\nI1118 07:28:41.674084 2692 log.go:181] (0x4000b57c20) (5) Data frame handling\nI1118 07:28:41.674428 2692 log.go:181] (0x4000b57c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:28:41.706797 2692 log.go:181] (0x400026d6b0) Data frame received for 3\nI1118 07:28:41.706871 2692 log.go:181] (0x4000b57b80) (3) Data frame handling\nI1118 07:28:41.706942 2692 log.go:181] (0x4000b57b80) (3) Data frame sent\nI1118 07:28:41.706996 2692 log.go:181] (0x400026d6b0) Data frame received for 3\nI1118 07:28:41.707066 2692 log.go:181] (0x4000b57b80) (3) Data frame handling\nI1118 07:28:41.707283 2692 log.go:181] (0x400026d6b0) Data frame received for 5\nI1118 07:28:41.707450 2692 log.go:181] (0x4000b57c20) (5) Data frame handling\nI1118 07:28:41.710089 2692 log.go:181] (0x400026d6b0) Data frame received for 1\nI1118 07:28:41.710192 2692 log.go:181] (0x4000412280) (1) Data frame handling\nI1118 07:28:41.710301 2692 log.go:181] (0x4000412280) (1) Data frame sent\nI1118 07:28:41.714321 2692 log.go:181] (0x400026d6b0) (0x4000412280) Stream removed, broadcasting: 1\nI1118 07:28:41.715107 2692 log.go:181] (0x400026d6b0) Go away received\nI1118 07:28:41.719164 2692 log.go:181] (0x400026d6b0) (0x4000412280) Stream removed, broadcasting: 1\nI1118 07:28:41.719470 2692 log.go:181] (0x400026d6b0) (0x4000b57b80) Stream removed, broadcasting: 3\nI1118 07:28:41.719686 2692 log.go:181] (0x400026d6b0) (0x4000b57c20) Stream removed, broadcasting: 5\n" Nov 18 07:28:41.731: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:28:41.731: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 07:28:41.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:28:43.533: INFO: stderr: "I1118 07:28:43.370218 2712 log.go:181] (0x400012a000) (0x4000bdc000) Create stream\nI1118 07:28:43.376898 2712 log.go:181] (0x400012a000) (0x4000bdc000) Stream added, broadcasting: 1\nI1118 07:28:43.392234 2712 log.go:181] (0x400012a000) Reply frame received for 1\nI1118 07:28:43.393718 2712 log.go:181] (0x400012a000) (0x40008d50e0) Create stream\nI1118 07:28:43.393846 2712 log.go:181] (0x400012a000) (0x40008d50e0) Stream added, broadcasting: 3\nI1118 07:28:43.395886 2712 log.go:181] (0x400012a000) Reply frame received for 3\nI1118 07:28:43.396239 2712 log.go:181] (0x400012a000) (0x40008d5360) Create stream\nI1118 07:28:43.396342 2712 log.go:181] (0x400012a000) (0x40008d5360) Stream added, broadcasting: 5\nI1118 07:28:43.397855 2712 log.go:181] (0x400012a000) Reply frame received for 5\nI1118 07:28:43.484304 2712 log.go:181] (0x400012a000) Data frame received for 5\nI1118 07:28:43.484548 2712 log.go:181] (0x40008d5360) (5) Data frame handling\nI1118 07:28:43.485131 2712 log.go:181] (0x40008d5360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:28:43.511068 2712 log.go:181] (0x400012a000) Data frame received for 3\nI1118 07:28:43.511254 2712 log.go:181] (0x40008d50e0) (3) Data frame handling\nI1118 07:28:43.511495 2712 log.go:181] (0x40008d50e0) (3) Data frame sent\nI1118 07:28:43.511619 2712 log.go:181] (0x400012a000) Data frame received for 3\nI1118 07:28:43.511786 2712 log.go:181] (0x400012a000) Data frame received for 5\nI1118 
07:28:43.511971 2712 log.go:181] (0x40008d5360) (5) Data frame handling\nI1118 07:28:43.512224 2712 log.go:181] (0x40008d50e0) (3) Data frame handling\nI1118 07:28:43.515705 2712 log.go:181] (0x400012a000) Data frame received for 1\nI1118 07:28:43.515879 2712 log.go:181] (0x4000bdc000) (1) Data frame handling\nI1118 07:28:43.516039 2712 log.go:181] (0x4000bdc000) (1) Data frame sent\nI1118 07:28:43.517287 2712 log.go:181] (0x400012a000) (0x4000bdc000) Stream removed, broadcasting: 1\nI1118 07:28:43.518992 2712 log.go:181] (0x400012a000) Go away received\nI1118 07:28:43.522270 2712 log.go:181] (0x400012a000) (0x4000bdc000) Stream removed, broadcasting: 1\nI1118 07:28:43.523243 2712 log.go:181] (0x400012a000) (0x40008d50e0) Stream removed, broadcasting: 3\nI1118 07:28:43.523752 2712 log.go:181] (0x400012a000) (0x40008d5360) Stream removed, broadcasting: 5\n" Nov 18 07:28:43.534: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:28:43.534: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 07:28:43.534: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 07:28:43.544: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 18 07:28:53.561: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 18 07:28:53.561: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 18 07:28:53.561: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 18 07:28:53.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999988657s Nov 18 07:28:54.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.972364493s Nov 18 07:28:55.622: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.961362344s Nov 18 07:28:56.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.950290445s Nov 18 07:28:57.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.940109523s Nov 18 07:28:59.007: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.576011633s Nov 18 07:29:00.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.565229766s Nov 18 07:29:02.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.535437974s Nov 18 07:29:03.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 302.301746ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3140 Nov 18 07:29:04.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:05.964: INFO: stderr: "I1118 07:29:05.831096 2732 log.go:181] (0x40001b2370) (0x4000d98000) Create stream\nI1118 07:29:05.836378 2732 log.go:181] (0x40001b2370) (0x4000d98000) Stream added, broadcasting: 1\nI1118 07:29:05.845268 2732 log.go:181] (0x40001b2370) Reply frame received for 1\nI1118 07:29:05.845836 2732 log.go:181] (0x40001b2370) (0x4000520140) Create stream\nI1118 07:29:05.845921 2732 log.go:181] (0x40001b2370) (0x4000520140) Stream added, broadcasting: 3\nI1118 07:29:05.847147 2732 log.go:181] (0x40001b2370) Reply frame received for 3\nI1118 07:29:05.847342 2732 log.go:181] (0x40001b2370) (0x4000d980a0) Create stream\nI1118 
07:29:05.847393 2732 log.go:181] (0x40001b2370) (0x4000d980a0) Stream added, broadcasting: 5\nI1118 07:29:05.848762 2732 log.go:181] (0x40001b2370) Reply frame received for 5\nI1118 07:29:05.940948 2732 log.go:181] (0x40001b2370) Data frame received for 5\nI1118 07:29:05.941321 2732 log.go:181] (0x40001b2370) Data frame received for 1\nI1118 07:29:05.941456 2732 log.go:181] (0x4000d98000) (1) Data frame handling\nI1118 07:29:05.941542 2732 log.go:181] (0x40001b2370) Data frame received for 3\nI1118 07:29:05.941641 2732 log.go:181] (0x4000520140) (3) Data frame handling\nI1118 07:29:05.941751 2732 log.go:181] (0x4000d980a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 07:29:05.943496 2732 log.go:181] (0x4000520140) (3) Data frame sent\nI1118 07:29:05.943679 2732 log.go:181] (0x4000d980a0) (5) Data frame sent\nI1118 07:29:05.943948 2732 log.go:181] (0x40001b2370) Data frame received for 5\nI1118 07:29:05.944043 2732 log.go:181] (0x40001b2370) Data frame received for 3\nI1118 07:29:05.944120 2732 log.go:181] (0x4000520140) (3) Data frame handling\nI1118 07:29:05.944320 2732 log.go:181] (0x4000d980a0) (5) Data frame handling\nI1118 07:29:05.944707 2732 log.go:181] (0x4000d98000) (1) Data frame sent\nI1118 07:29:05.946170 2732 log.go:181] (0x40001b2370) (0x4000d98000) Stream removed, broadcasting: 1\nI1118 07:29:05.948047 2732 log.go:181] (0x40001b2370) Go away received\nI1118 07:29:05.952072 2732 log.go:181] (0x40001b2370) (0x4000d98000) Stream removed, broadcasting: 1\nI1118 07:29:05.952466 2732 log.go:181] (0x40001b2370) (0x4000520140) Stream removed, broadcasting: 3\nI1118 07:29:05.952750 2732 log.go:181] (0x40001b2370) (0x4000d980a0) Stream removed, broadcasting: 5\n" Nov 18 07:29:05.965: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 07:29:05.965: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 07:29:05.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:07.630: INFO: stderr: "I1118 07:29:07.460141 2752 log.go:181] (0x400003a0b0) (0x40001c2320) Create stream\nI1118 07:29:07.465649 2752 log.go:181] (0x400003a0b0) (0x40001c2320) Stream added, broadcasting: 1\nI1118 07:29:07.480494 2752 log.go:181] (0x400003a0b0) Reply frame received for 1\nI1118 07:29:07.481136 2752 log.go:181] (0x400003a0b0) (0x40001c23c0) Create stream\nI1118 07:29:07.481194 2752 log.go:181] (0x400003a0b0) (0x40001c23c0) Stream added, broadcasting: 3\nI1118 07:29:07.482922 2752 log.go:181] (0x400003a0b0) Reply frame received for 3\nI1118 07:29:07.483354 2752 log.go:181] (0x400003a0b0) (0x4000b8e640) Create stream\nI1118 07:29:07.483456 2752 log.go:181] (0x400003a0b0) (0x4000b8e640) Stream added, broadcasting: 5\nI1118 07:29:07.485121 2752 log.go:181] (0x400003a0b0) Reply frame received for 5\nI1118 07:29:07.585936 2752 log.go:181] (0x400003a0b0) Data frame received for 1\nI1118 07:29:07.586305 2752 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 07:29:07.586428 2752 log.go:181] (0x4000b8e640) (5) Data frame handling\nI1118 07:29:07.586500 2752 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 07:29:07.586585 2752 log.go:181] (0x40001c23c0) (3) Data frame handling\nI1118 07:29:07.586751 2752 log.go:181] (0x40001c2320) (1) Data frame 
handling\nI1118 07:29:07.587755 2752 log.go:181] (0x40001c23c0) (3) Data frame sent\nI1118 07:29:07.587979 2752 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 07:29:07.588046 2752 log.go:181] (0x40001c23c0) (3) Data frame handling\nI1118 07:29:07.588316 2752 log.go:181] (0x40001c2320) (1) Data frame sent\nI1118 07:29:07.588920 2752 log.go:181] (0x4000b8e640) (5) Data frame sent\nI1118 07:29:07.589019 2752 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 07:29:07.589094 2752 log.go:181] (0x4000b8e640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 07:29:07.591681 2752 log.go:181] (0x400003a0b0) (0x40001c2320) Stream removed, broadcasting: 1\nI1118 07:29:07.596425 2752 log.go:181] (0x400003a0b0) Go away received\nI1118 07:29:07.614959 2752 log.go:181] (0x400003a0b0) (0x40001c2320) Stream removed, broadcasting: 1\nI1118 07:29:07.615382 2752 log.go:181] (0x400003a0b0) (0x40001c23c0) Stream removed, broadcasting: 3\nI1118 07:29:07.615728 2752 log.go:181] (0x400003a0b0) (0x4000b8e640) Stream removed, broadcasting: 5\n" Nov 18 07:29:07.630: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 07:29:07.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 07:29:07.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:09.269: INFO: rc: 1 Nov 18 07:29:09.270: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Nov 18 07:29:19.271: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:20.717: INFO: rc: 1 Nov 18 07:29:20.718: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:29:30.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:32.089: INFO: rc: 1 Nov 18 07:29:32.089: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:29:42.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Nov 18 07:29:43.472: INFO: rc: 1 Nov 18 07:29:43.472: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:29:53.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:29:54.938: INFO: rc: 1 Nov 18 07:29:54.938: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:30:04.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:30:06.360: INFO: rc: 1 Nov 18 07:30:06.361: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:30:16.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:30:17.705: INFO: rc: 1 Nov 18 07:30:17.705: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:30:27.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:30:29.049: INFO: rc: 1 Nov 18 07:30:29.049: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:30:39.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:30:40.468: INFO: rc: 1 Nov 18 07:30:40.468: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:30:50.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:30:51.880: INFO: rc: 1 Nov 18 07:30:51.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:01.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:31:03.259: INFO: rc: 1 Nov 18 07:31:03.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:13.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:31:14.660: INFO: rc: 1 Nov 18 07:31:14.660: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:24.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:31:26.036: INFO: rc: 1 Nov 18 07:31:26.036: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:36.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:31:37.439: INFO: rc: 1 Nov 18 07:31:37.439: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:47.440: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:31:48.783: INFO: rc: 1 Nov 18 07:31:48.783: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:31:58.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:00.210: INFO: rc: 1 Nov 18 07:32:00.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:32:10.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:11.582: INFO: rc: 1 Nov 18 07:32:11.583: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:32:21.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:23.006: INFO: rc: 1 Nov 18 07:32:23.006: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:32:33.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:34.487: INFO: rc: 1 Nov 18 07:32:34.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:32:44.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:45.882: INFO: rc: 1 Nov 18 07:32:45.882: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:32:55.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:32:57.360: INFO: rc: 1 Nov 18 07:32:57.360: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:33:07.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:33:08.738: INFO: rc: 1 Nov 18 07:33:08.739: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:33:18.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:33:20.153: INFO: rc: 1 Nov 18 07:33:20.154: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:33:30.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:33:31.473: INFO: rc: 1 Nov 18 07:33:31.473: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:33:41.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:33:43.002: INFO: rc: 1 Nov 18 07:33:43.002: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:33:53.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:33:54.592: INFO: rc: 1 Nov 18 07:33:54.592: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:34:04.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:34:05.917: INFO: rc: 1 Nov 18 07:34:05.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Nov 18 07:34:15.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3140 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:34:17.487: INFO: rc: 1 Nov 18 07:34:17.488: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Nov 18 07:34:17.488: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 07:34:17.504: INFO: Deleting all statefulset in ns statefulset-3140 Nov 18 07:34:17.508: INFO: Scaling statefulset ss to 0 Nov 18 07:34:17.521: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 07:34:17.525: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:34:17.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3140" for this suite. 
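------------------------------
What the long exchange above exercises: moving index.html out of the httpd docroot makes a pod's readiness probe fail, and an unready pod halts further ordered scaling ("doesn't scale past 1", "doesn't scale past 3"); moving the file back restores readiness and the controller resumes. The trailing retry loop appears to target ss-2 after scale-down has already deleted it, hence the repeated NotFound errors until the helper gives up and the test proceeds. Scaling itself goes through the scale subresource; a minimal client-go sketch (namespace and set name taken from the log, but the code is an assumed illustration, not the suite's own helper):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// scaleStatefulSet updates the scale subresource. The StatefulSet
// controller then adds pods one ordinal at a time (ss-0, ss-1, ss-2)
// and removes them in reverse, pausing while any pod is unready.
func scaleStatefulSet(cs kubernetes.Interface, ns, name string, replicas int32) error {
    scale, err := cs.AppsV1().StatefulSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    scale.Spec.Replicas = replicas
    _, err = cs.AppsV1().StatefulSets(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
    return err
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    if err := scaleStatefulSet(cs, "statefulset-3140", "ss", 3); err != nil {
        panic(err)
    }
    fmt.Println("scale requested; pods start in order and stop in reverse order")
}
------------------------------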
• [SLOW TEST:392.792 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":214,"skipped":3514,"failed":0} SSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:34:17.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:34:17.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3859" for this suite. 
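------------------------------
No intermediate STEPs appear between [It] and [AfterEach] above because the check is a single lookup: the `kubernetes` Service in the `default` namespace should expose HTTPS on port 443. A sketch of roughly that assertion in Go — assumed to mirror the test's intent, not its exact code:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for _, p := range svc.Spec.Ports {
        if p.Name == "https" && p.Port == 443 {
            fmt.Println("default/kubernetes serves https on 443")
            return
        }
    }
    panic("no https:443 port on the kubernetes service")
}
------------------------------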
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":215,"skipped":3520,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:34:17.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Nov 18 07:34:17.746: INFO: Waiting up to 5m0s for pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3" in namespace "var-expansion-8930" to be "Succeeded or Failed" Nov 18 07:34:17.767: INFO: Pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3": Phase="Pending", Reason="", readiness=false. Elapsed: 20.934533ms Nov 18 07:34:19.775: INFO: Pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028370519s Nov 18 07:34:21.828: INFO: Pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081683352s Nov 18 07:34:23.835: INFO: Pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088280921s STEP: Saw pod success Nov 18 07:34:23.835: INFO: Pod "var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3" satisfied condition "Succeeded or Failed" Nov 18 07:34:23.839: INFO: Trying to get logs from node leguer-worker pod var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3 container dapi-container: STEP: delete the pod Nov 18 07:34:23.908: INFO: Waiting for pod var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3 to disappear Nov 18 07:34:23.920: INFO: Pod var-expansion-04303a5d-0507-4e5d-b3b1-0a8b7a2054b3 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:34:23.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8930" for this suite. 
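------------------------------
The env-composition pod above succeeds when one variable is built from another via `$(VAR)` references, which the kubelet expands before starting the container. A hedged sketch of such a pod spec in Go — the image, names, and values here are illustrative, not the test's literal fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    {Name: "FOO", Value: "foo-value"},
                    // $(FOO) is expanded by the kubelet before the
                    // container starts, so BAR becomes "foo-value;;".
                    {Name: "BAR", Value: "$(FOO);;"},
                },
            }},
        },
    }
    fmt.Printf("%s composes BAR=%s from FOO\n", pod.Name, pod.Spec.Containers[0].Env[1].Value)
}
------------------------------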
• [SLOW TEST:6.282 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3527,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:34:23.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Nov 18 07:34:24.055: INFO: Waiting up to 5m0s for pod "var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2" in namespace "var-expansion-2216" to be "Succeeded or Failed" Nov 18 07:34:24.070: INFO: Pod "var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.741016ms Nov 18 07:34:26.124: INFO: Pod "var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068851862s Nov 18 07:34:28.133: INFO: Pod "var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077582622s STEP: Saw pod success Nov 18 07:34:28.133: INFO: Pod "var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2" satisfied condition "Succeeded or Failed" Nov 18 07:34:28.139: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2 container dapi-container: STEP: delete the pod Nov 18 07:34:28.221: INFO: Waiting for pod var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2 to disappear Nov 18 07:34:28.228: INFO: Pod var-expansion-d0f14732-10f3-4e4a-bdac-6a0d4b62f5b2 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:34:28.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2216" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":217,"skipped":3531,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:34:28.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:34:39.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5128" for this suite. • [SLOW TEST:11.277 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":218,"skipped":3534,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:34:39.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1350 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Nov 18 07:34:39.667: INFO: Found 0 stateful pods, waiting for 3 Nov 18 07:34:49.702: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:34:49.702: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:34:49.702: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Nov 18 07:34:59.677: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:34:59.678: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:34:59.678: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Nov 18 07:34:59.699: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1350 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:35:04.431: INFO: stderr: "I1118 07:35:04.301986 3338 log.go:181] (0x40000360b0) (0x4001032000) Create stream\nI1118 07:35:04.305988 3338 log.go:181] (0x40000360b0) (0x4001032000) Stream added, broadcasting: 1\nI1118 07:35:04.319299 3338 log.go:181] (0x40000360b0) Reply frame received for 1\nI1118 07:35:04.320285 3338 log.go:181] (0x40000360b0) (0x4000234000) Create stream\nI1118 07:35:04.320381 3338 log.go:181] (0x40000360b0) (0x4000234000) Stream added, broadcasting: 3\nI1118 07:35:04.322045 3338 log.go:181] (0x40000360b0) Reply frame received for 3\nI1118 07:35:04.322454 3338 log.go:181] (0x40000360b0) (0x4001032140) Create stream\nI1118 07:35:04.322542 3338 log.go:181] (0x40000360b0) (0x4001032140) Stream added, broadcasting: 5\nI1118 07:35:04.324116 3338 log.go:181] (0x40000360b0) Reply frame received for 5\nI1118 07:35:04.378192 3338 log.go:181] (0x40000360b0) Data frame received for 5\nI1118 07:35:04.378475 3338 log.go:181] (0x4001032140) (5) Data frame handling\nI1118 
07:35:04.379219 3338 log.go:181] (0x4001032140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:35:04.410235 3338 log.go:181] (0x40000360b0) Data frame received for 5\nI1118 07:35:04.410629 3338 log.go:181] (0x40000360b0) Data frame received for 3\nI1118 07:35:04.410784 3338 log.go:181] (0x4000234000) (3) Data frame handling\nI1118 07:35:04.410993 3338 log.go:181] (0x4001032140) (5) Data frame handling\nI1118 07:35:04.411303 3338 log.go:181] (0x4000234000) (3) Data frame sent\nI1118 07:35:04.411478 3338 log.go:181] (0x40000360b0) Data frame received for 3\nI1118 07:35:04.411629 3338 log.go:181] (0x4000234000) (3) Data frame handling\nI1118 07:35:04.412418 3338 log.go:181] (0x40000360b0) Data frame received for 1\nI1118 07:35:04.412622 3338 log.go:181] (0x4001032000) (1) Data frame handling\nI1118 07:35:04.413107 3338 log.go:181] (0x4001032000) (1) Data frame sent\nI1118 07:35:04.414345 3338 log.go:181] (0x40000360b0) (0x4001032000) Stream removed, broadcasting: 1\nI1118 07:35:04.417188 3338 log.go:181] (0x40000360b0) Go away received\nI1118 07:35:04.420747 3338 log.go:181] (0x40000360b0) (0x4001032000) Stream removed, broadcasting: 1\nI1118 07:35:04.421052 3338 log.go:181] (0x40000360b0) (0x4000234000) Stream removed, broadcasting: 3\nI1118 07:35:04.421228 3338 log.go:181] (0x40000360b0) (0x4001032140) Stream removed, broadcasting: 5\n" Nov 18 07:35:04.432: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:35:04.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Nov 18 07:35:14.489: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Nov 18 07:35:24.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1350 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:35:26.146: INFO: stderr: "I1118 07:35:25.991454 3358 log.go:181] (0x400096e000) (0x4000966000) Create stream\nI1118 07:35:26.000001 3358 log.go:181] (0x400096e000) (0x4000966000) Stream added, broadcasting: 1\nI1118 07:35:26.020416 3358 log.go:181] (0x400096e000) Reply frame received for 1\nI1118 07:35:26.021285 3358 log.go:181] (0x400096e000) (0x4000b9e460) Create stream\nI1118 07:35:26.021373 3358 log.go:181] (0x400096e000) (0x4000b9e460) Stream added, broadcasting: 3\nI1118 07:35:26.022811 3358 log.go:181] (0x400096e000) Reply frame received for 3\nI1118 07:35:26.023125 3358 log.go:181] (0x400096e000) (0x4000b9ea00) Create stream\nI1118 07:35:26.023191 3358 log.go:181] (0x400096e000) (0x4000b9ea00) Stream added, broadcasting: 5\nI1118 07:35:26.024422 3358 log.go:181] (0x400096e000) Reply frame received for 5\nI1118 07:35:26.119494 3358 log.go:181] (0x400096e000) Data frame received for 3\nI1118 07:35:26.119778 3358 log.go:181] (0x400096e000) Data frame received for 1\nI1118 07:35:26.120131 3358 log.go:181] (0x4000b9e460) (3) Data frame handling\nI1118 07:35:26.120479 3358 log.go:181] (0x400096e000) Data frame received for 5\nI1118 07:35:26.120657 3358 log.go:181] (0x4000b9ea00) (5) Data frame handling\nI1118 07:35:26.120748 3358 log.go:181] (0x4000966000) (1) Data frame handling\nI1118 07:35:26.121942 3358 log.go:181] (0x4000b9e460) (3) Data frame sent\nI1118 
07:35:26.123046 3358 log.go:181] (0x4000966000) (1) Data frame sent\nI1118 07:35:26.123406 3358 log.go:181] (0x4000b9ea00) (5) Data frame sent\nI1118 07:35:26.123531 3358 log.go:181] (0x400096e000) Data frame received for 3\nI1118 07:35:26.123649 3358 log.go:181] (0x4000b9e460) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 07:35:26.123797 3358 log.go:181] (0x400096e000) Data frame received for 5\nI1118 07:35:26.123945 3358 log.go:181] (0x4000b9ea00) (5) Data frame handling\nI1118 07:35:26.125306 3358 log.go:181] (0x400096e000) (0x4000966000) Stream removed, broadcasting: 1\nI1118 07:35:26.128469 3358 log.go:181] (0x400096e000) Go away received\nI1118 07:35:26.133734 3358 log.go:181] (0x400096e000) (0x4000966000) Stream removed, broadcasting: 1\nI1118 07:35:26.134589 3358 log.go:181] (0x400096e000) (0x4000b9e460) Stream removed, broadcasting: 3\nI1118 07:35:26.134949 3358 log.go:181] (0x400096e000) (0x4000b9ea00) Stream removed, broadcasting: 5\n" Nov 18 07:35:26.146: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 07:35:26.147: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 07:35:36.190: INFO: Waiting for StatefulSet statefulset-1350/ss2 to complete update Nov 18 07:35:36.191: INFO: Waiting for Pod statefulset-1350/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 07:35:36.191: INFO: Waiting for Pod statefulset-1350/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 07:35:36.191: INFO: Waiting for Pod statefulset-1350/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 07:35:46.230: INFO: Waiting for StatefulSet statefulset-1350/ss2 to complete update Nov 18 07:35:46.230: INFO: Waiting for Pod statefulset-1350/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 07:35:46.230: INFO: Waiting for Pod statefulset-1350/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Nov 18 07:35:56.403: INFO: Waiting for StatefulSet statefulset-1350/ss2 to complete update STEP: Rolling back to a previous revision Nov 18 07:36:06.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1350 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 07:36:07.893: INFO: stderr: "I1118 07:36:07.722768 3378 log.go:181] (0x40001bec60) (0x400081c1e0) Create stream\nI1118 07:36:07.725161 3378 log.go:181] (0x40001bec60) (0x400081c1e0) Stream added, broadcasting: 1\nI1118 07:36:07.735777 3378 log.go:181] (0x40001bec60) Reply frame received for 1\nI1118 07:36:07.736672 3378 log.go:181] (0x40001bec60) (0x40007241e0) Create stream\nI1118 07:36:07.736754 3378 log.go:181] (0x40001bec60) (0x40007241e0) Stream added, broadcasting: 3\nI1118 07:36:07.738645 3378 log.go:181] (0x40001bec60) Reply frame received for 3\nI1118 07:36:07.739135 3378 log.go:181] (0x40001bec60) (0x400081c280) Create stream\nI1118 07:36:07.739250 3378 log.go:181] (0x40001bec60) (0x400081c280) Stream added, broadcasting: 5\nI1118 07:36:07.741357 3378 log.go:181] (0x40001bec60) Reply frame received for 5\nI1118 07:36:07.835479 3378 log.go:181] (0x40001bec60) Data frame received for 5\nI1118 07:36:07.835702 3378 log.go:181] (0x400081c280) (5) Data frame handling\nI1118 07:36:07.836217 3378 log.go:181] (0x400081c280) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI1118 07:36:07.869329 3378 log.go:181] (0x40001bec60) Data frame received for 5\nI1118 07:36:07.869611 3378 log.go:181] (0x40001bec60) Data frame received for 3\nI1118 07:36:07.869837 3378 log.go:181] (0x40007241e0) (3) Data frame handling\nI1118 07:36:07.869938 3378 log.go:181] (0x400081c280) (5) Data frame handling\nI1118 07:36:07.870165 3378 log.go:181] (0x40007241e0) (3) Data frame sent\nI1118 07:36:07.870303 3378 log.go:181] (0x40001bec60) Data frame received for 3\nI1118 07:36:07.870392 3378 log.go:181] (0x40007241e0) (3) Data frame handling\nI1118 07:36:07.872077 3378 log.go:181] (0x40001bec60) Data frame received for 1\nI1118 07:36:07.872159 3378 log.go:181] (0x400081c1e0) (1) Data frame handling\nI1118 07:36:07.872240 3378 log.go:181] (0x400081c1e0) (1) Data frame sent\nI1118 07:36:07.873789 3378 log.go:181] (0x40001bec60) (0x400081c1e0) Stream removed, broadcasting: 1\nI1118 07:36:07.877144 3378 log.go:181] (0x40001bec60) Go away received\nI1118 07:36:07.881235 3378 log.go:181] (0x40001bec60) (0x400081c1e0) Stream removed, broadcasting: 1\nI1118 07:36:07.881887 3378 log.go:181] (0x40001bec60) (0x40007241e0) Stream removed, broadcasting: 3\nI1118 07:36:07.882441 3378 log.go:181] (0x40001bec60) (0x400081c280) Stream removed, broadcasting: 5\n" Nov 18 07:36:07.894: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 07:36:07.894: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 07:36:17.951: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Nov 18 07:36:28.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1350 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 07:36:29.652: INFO: stderr: "I1118 07:36:29.526192 3399 log.go:181] (0x4000b376b0) (0x40008a8640) Create stream\nI1118 07:36:29.530211 3399 log.go:181] (0x4000b376b0) (0x40008a8640) Stream added, broadcasting: 1\nI1118 07:36:29.550702 3399 log.go:181] (0x4000b376b0) Reply frame received for 1\nI1118 07:36:29.551271 3399 log.go:181] (0x4000b376b0) (0x400082c000) Create stream\nI1118 07:36:29.551328 3399 log.go:181] (0x4000b376b0) (0x400082c000) Stream added, broadcasting: 3\nI1118 07:36:29.552475 3399 log.go:181] (0x4000b376b0) Reply frame received for 3\nI1118 07:36:29.552680 3399 log.go:181] (0x4000b376b0) (0x400082c0a0) Create stream\nI1118 07:36:29.552734 3399 log.go:181] (0x4000b376b0) (0x400082c0a0) Stream added, broadcasting: 5\nI1118 07:36:29.553877 3399 log.go:181] (0x4000b376b0) Reply frame received for 5\nI1118 07:36:29.630023 3399 log.go:181] (0x4000b376b0) Data frame received for 5\nI1118 07:36:29.630321 3399 log.go:181] (0x4000b376b0) Data frame received for 3\nI1118 07:36:29.630562 3399 log.go:181] (0x4000b376b0) Data frame received for 1\nI1118 07:36:29.630804 3399 log.go:181] (0x400082c000) (3) Data frame handling\nI1118 07:36:29.631167 3399 log.go:181] (0x400082c0a0) (5) Data frame handling\nI1118 07:36:29.631369 3399 log.go:181] (0x40008a8640) (1) Data frame handling\nI1118 07:36:29.632058 3399 log.go:181] (0x400082c0a0) (5) Data frame sent\nI1118 07:36:29.632169 3399 log.go:181] (0x40008a8640) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 07:36:29.633133 3399 log.go:181] (0x4000b376b0) Data frame received for 5\nI1118 07:36:29.633225 3399 
log.go:181] (0x400082c0a0) (5) Data frame handling\nI1118 07:36:29.633325 3399 log.go:181] (0x400082c000) (3) Data frame sent\nI1118 07:36:29.633429 3399 log.go:181] (0x4000b376b0) Data frame received for 3\nI1118 07:36:29.634246 3399 log.go:181] (0x4000b376b0) (0x40008a8640) Stream removed, broadcasting: 1\nI1118 07:36:29.636040 3399 log.go:181] (0x400082c000) (3) Data frame handling\nI1118 07:36:29.637242 3399 log.go:181] (0x4000b376b0) Go away received\nI1118 07:36:29.640660 3399 log.go:181] (0x4000b376b0) (0x40008a8640) Stream removed, broadcasting: 1\nI1118 07:36:29.641115 3399 log.go:181] (0x4000b376b0) (0x400082c000) Stream removed, broadcasting: 3\nI1118 07:36:29.641360 3399 log.go:181] (0x4000b376b0) (0x400082c0a0) Stream removed, broadcasting: 5\n" Nov 18 07:36:29.653: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 07:36:29.653: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 07:36:39.691: INFO: Waiting for StatefulSet statefulset-1350/ss2 to complete update Nov 18 07:36:39.691: INFO: Waiting for Pod statefulset-1350/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Nov 18 07:36:39.691: INFO: Waiting for Pod statefulset-1350/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Nov 18 07:36:39.692: INFO: Waiting for Pod statefulset-1350/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Nov 18 07:36:49.714: INFO: Waiting for StatefulSet statefulset-1350/ss2 to complete update Nov 18 07:36:49.714: INFO: Waiting for Pod statefulset-1350/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 07:36:59.707: INFO: Deleting all statefulset in ns statefulset-1350 Nov 18 07:36:59.711: INFO: Scaling statefulset ss2 to 0 Nov 18 07:37:19.775: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 07:37:19.780: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:37:19.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1350" for this suite. 
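The rolling-update test above changes the pod template image of StatefulSet ss2, waits for the pods to converge on the new controller revision in reverse ordinal order, and then rolls the change back the same way. Roughly the same flow by hand, assuming a StatefulSet named web whose container is named httpd (illustrative names, not the test's objects):

# Trigger a rolling update by editing the pod template image.
kubectl set image statefulset/web httpd=httpd:2.4.39-alpine
kubectl rollout status statefulset/web

# The revisions the test compares pods against are ControllerRevisions.
kubectl get controllerrevisions

# Roll back to the previous template revision.
kubectl rollout undo statefulset/web
kubectl rollout status statefulset/web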
• [SLOW TEST:160.298 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":219,"skipped":3541,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:37:19.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:37:22.253: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:37:24.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:37:26.291: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741281842, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:37:29.361: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:37:29.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-276-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:37:30.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5856" for this suite. STEP: Destroying namespace "webhook-5856-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.874 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":220,"skipped":3562,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:37:30.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Nov 18 
07:37:30.843: INFO: created test-podtemplate-1 Nov 18 07:37:30.849: INFO: created test-podtemplate-2 Nov 18 07:37:30.871: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Nov 18 07:37:30.889: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Nov 18 07:37:30.908: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:37:30.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1649" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":221,"skipped":3583,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:37:30.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:37:31.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8646" for this suite. 
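The Events test above drives a core/v1 Event through its full lifecycle: create, list across namespaces, patch, fetch, delete, and list again. The same sequence with kubectl, sketched against the default namespace; the event name and field values below are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Event
metadata:
  name: test-event
  namespace: default
# The event namespace matches the involved object's namespace, which
# core v1 event validation expects for namespaced kinds.
involvedObject:
  kind: Pod
  name: some-pod
  namespace: default
reason: Testing
message: initial message
type: Normal
EOF

kubectl get events --all-namespaces
kubectl patch event test-event -n default --type=merge -p '{"message":"patched message"}'
kubectl get event test-event -n default -o yaml
kubectl delete event test-event -n default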
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":222,"skipped":3612,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:37:31.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-70a7ee16-794e-4867-a7d6-984183bde3b6 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-70a7ee16-794e-4867-a7d6-984183bde3b6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:37:40.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2109" for this suite. • [SLOW TEST:8.422 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3617,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:37:40.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8951 STEP: creating service affinity-nodeport in namespace services-8951 STEP: 
creating replication controller affinity-nodeport in namespace services-8951 I1118 07:37:40.325580 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8951, replica count: 3 I1118 07:37:43.376980 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:37:46.377698 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:37:49.378503 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 07:37:49.400: INFO: Creating new exec pod Nov 18 07:37:54.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-8951 execpod-affinityzz5jm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Nov 18 07:37:56.083: INFO: stderr: "I1118 07:37:55.937255 3419 log.go:181] (0x40001ae000) (0x4000890820) Create stream\nI1118 07:37:55.944047 3419 log.go:181] (0x40001ae000) (0x4000890820) Stream added, broadcasting: 1\nI1118 07:37:55.958300 3419 log.go:181] (0x40001ae000) Reply frame received for 1\nI1118 07:37:55.959185 3419 log.go:181] (0x40001ae000) (0x400039c460) Create stream\nI1118 07:37:55.959332 3419 log.go:181] (0x40001ae000) (0x400039c460) Stream added, broadcasting: 3\nI1118 07:37:55.961094 3419 log.go:181] (0x40001ae000) Reply frame received for 3\nI1118 07:37:55.961367 3419 log.go:181] (0x40001ae000) (0x400081e0a0) Create stream\nI1118 07:37:55.961457 3419 log.go:181] (0x40001ae000) (0x400081e0a0) Stream added, broadcasting: 5\nI1118 07:37:55.962609 3419 log.go:181] (0x40001ae000) Reply frame received for 5\nI1118 07:37:56.043267 3419 log.go:181] (0x40001ae000) Data frame received for 3\nI1118 07:37:56.043594 3419 log.go:181] (0x40001ae000) Data frame received for 5\nI1118 07:37:56.043787 3419 log.go:181] (0x400039c460) (3) Data frame handling\nI1118 07:37:56.043976 3419 log.go:181] (0x400081e0a0) (5) Data frame handling\nI1118 07:37:56.045553 3419 log.go:181] (0x40001ae000) Data frame received for 1\nI1118 07:37:56.045655 3419 log.go:181] (0x4000890820) (1) Data frame handling\nI1118 07:37:56.046478 3419 log.go:181] (0x400081e0a0) (5) Data frame sent\nI1118 07:37:56.046821 3419 log.go:181] (0x40001ae000) Data frame received for 5\nI1118 07:37:56.046944 3419 log.go:181] (0x400081e0a0) (5) Data frame handling\nI1118 07:37:56.047023 3419 log.go:181] (0x4000890820) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1118 07:37:56.053633 3419 log.go:181] (0x40001ae000) (0x4000890820) Stream removed, broadcasting: 1\nI1118 07:37:56.071086 3419 log.go:181] (0x40001ae000) Go away received\nI1118 07:37:56.074186 3419 log.go:181] (0x40001ae000) (0x4000890820) Stream removed, broadcasting: 1\nI1118 07:37:56.075008 3419 log.go:181] (0x40001ae000) (0x400039c460) Stream removed, broadcasting: 3\nI1118 07:37:56.075673 3419 log.go:181] (0x40001ae000) (0x400081e0a0) Stream removed, broadcasting: 5\n" Nov 18 07:37:56.085: INFO: stdout: "" Nov 18 07:37:56.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-8951 execpod-affinityzz5jm -- /bin/sh -x -c nc -zv -t -w 2 10.106.201.144 80' Nov 18 07:37:57.694: INFO: stderr: "I1118 
07:37:57.563953 3439 log.go:181] (0x4000b52840) (0x4000478460) Create stream\nI1118 07:37:57.569672 3439 log.go:181] (0x4000b52840) (0x4000478460) Stream added, broadcasting: 1\nI1118 07:37:57.583268 3439 log.go:181] (0x4000b52840) Reply frame received for 1\nI1118 07:37:57.584422 3439 log.go:181] (0x4000b52840) (0x4000478500) Create stream\nI1118 07:37:57.584522 3439 log.go:181] (0x4000b52840) (0x4000478500) Stream added, broadcasting: 3\nI1118 07:37:57.586440 3439 log.go:181] (0x4000b52840) Reply frame received for 3\nI1118 07:37:57.586849 3439 log.go:181] (0x4000b52840) (0x40006b6000) Create stream\nI1118 07:37:57.586935 3439 log.go:181] (0x4000b52840) (0x40006b6000) Stream added, broadcasting: 5\nI1118 07:37:57.588211 3439 log.go:181] (0x4000b52840) Reply frame received for 5\nI1118 07:37:57.671129 3439 log.go:181] (0x4000b52840) Data frame received for 3\nI1118 07:37:57.671613 3439 log.go:181] (0x4000478500) (3) Data frame handling\nI1118 07:37:57.672347 3439 log.go:181] (0x4000b52840) Data frame received for 1\nI1118 07:37:57.672458 3439 log.go:181] (0x4000478460) (1) Data frame handling\nI1118 07:37:57.672752 3439 log.go:181] (0x4000b52840) Data frame received for 5\nI1118 07:37:57.673139 3439 log.go:181] (0x40006b6000) (5) Data frame handling\nI1118 07:37:57.673355 3439 log.go:181] (0x40006b6000) (5) Data frame sent\nI1118 07:37:57.673670 3439 log.go:181] (0x4000478460) (1) Data frame sent\nI1118 07:37:57.674819 3439 log.go:181] (0x4000b52840) Data frame received for 5\nI1118 07:37:57.674898 3439 log.go:181] (0x40006b6000) (5) Data frame handling\nI1118 07:37:57.675206 3439 log.go:181] (0x4000b52840) (0x4000478460) Stream removed, broadcasting: 1\n+ nc -zv -t -w 2 10.106.201.144 80\nConnection to 10.106.201.144 80 port [tcp/http] succeeded!\nI1118 07:37:57.679057 3439 log.go:181] (0x4000b52840) Go away received\nI1118 07:37:57.682545 3439 log.go:181] (0x4000b52840) (0x4000478460) Stream removed, broadcasting: 1\nI1118 07:37:57.682827 3439 log.go:181] (0x4000b52840) (0x4000478500) Stream removed, broadcasting: 3\nI1118 07:37:57.683027 3439 log.go:181] (0x4000b52840) (0x40006b6000) Stream removed, broadcasting: 5\n" Nov 18 07:37:57.695: INFO: stdout: "" Nov 18 07:37:57.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-8951 execpod-affinityzz5jm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 32014' Nov 18 07:37:59.358: INFO: stderr: "I1118 07:37:59.225816 3459 log.go:181] (0x4001020e70) (0x4001018500) Create stream\nI1118 07:37:59.228912 3459 log.go:181] (0x4001020e70) (0x4001018500) Stream added, broadcasting: 1\nI1118 07:37:59.248821 3459 log.go:181] (0x4001020e70) Reply frame received for 1\nI1118 07:37:59.249476 3459 log.go:181] (0x4001020e70) (0x4000afc000) Create stream\nI1118 07:37:59.249537 3459 log.go:181] (0x4001020e70) (0x4000afc000) Stream added, broadcasting: 3\nI1118 07:37:59.250893 3459 log.go:181] (0x4001020e70) Reply frame received for 3\nI1118 07:37:59.251189 3459 log.go:181] (0x4001020e70) (0x4000c12000) Create stream\nI1118 07:37:59.251265 3459 log.go:181] (0x4001020e70) (0x4000c12000) Stream added, broadcasting: 5\nI1118 07:37:59.252811 3459 log.go:181] (0x4001020e70) Reply frame received for 5\nI1118 07:37:59.336982 3459 log.go:181] (0x4001020e70) Data frame received for 3\nI1118 07:37:59.337409 3459 log.go:181] (0x4001020e70) Data frame received for 1\nI1118 07:37:59.337686 3459 log.go:181] (0x4000afc000) (3) Data frame handling\nI1118 07:37:59.337826 3459 log.go:181] 
(0x4001020e70) Data frame received for 5\nI1118 07:37:59.337964 3459 log.go:181] (0x4000c12000) (5) Data frame handling\nI1118 07:37:59.338360 3459 log.go:181] (0x4001018500) (1) Data frame handling\nI1118 07:37:59.340642 3459 log.go:181] (0x4000c12000) (5) Data frame sent\nI1118 07:37:59.340979 3459 log.go:181] (0x4001018500) (1) Data frame sent\nI1118 07:37:59.341170 3459 log.go:181] (0x4001020e70) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.18 32014\nConnection to 172.18.0.18 32014 port [tcp/32014] succeeded!\nI1118 07:37:59.341267 3459 log.go:181] (0x4000c12000) (5) Data frame handling\nI1118 07:37:59.342505 3459 log.go:181] (0x4001020e70) (0x4001018500) Stream removed, broadcasting: 1\nI1118 07:37:59.345968 3459 log.go:181] (0x4001020e70) Go away received\nI1118 07:37:59.348646 3459 log.go:181] (0x4001020e70) (0x4001018500) Stream removed, broadcasting: 1\nI1118 07:37:59.349070 3459 log.go:181] (0x4001020e70) (0x4000afc000) Stream removed, broadcasting: 3\nI1118 07:37:59.349306 3459 log.go:181] (0x4001020e70) (0x4000c12000) Stream removed, broadcasting: 5\n" Nov 18 07:37:59.359: INFO: stdout: "" Nov 18 07:37:59.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-8951 execpod-affinityzz5jm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 32014' Nov 18 07:38:00.989: INFO: stderr: "I1118 07:38:00.866847 3479 log.go:181] (0x4000c8a000) (0x40005a0280) Create stream\nI1118 07:38:00.870461 3479 log.go:181] (0x4000c8a000) (0x40005a0280) Stream added, broadcasting: 1\nI1118 07:38:00.883940 3479 log.go:181] (0x4000c8a000) Reply frame received for 1\nI1118 07:38:00.885004 3479 log.go:181] (0x4000c8a000) (0x4000d82000) Create stream\nI1118 07:38:00.885144 3479 log.go:181] (0x4000c8a000) (0x4000d82000) Stream added, broadcasting: 3\nI1118 07:38:00.887106 3479 log.go:181] (0x4000c8a000) Reply frame received for 3\nI1118 07:38:00.887565 3479 log.go:181] (0x4000c8a000) (0x4000d820a0) Create stream\nI1118 07:38:00.887667 3479 log.go:181] (0x4000c8a000) (0x4000d820a0) Stream added, broadcasting: 5\nI1118 07:38:00.889115 3479 log.go:181] (0x4000c8a000) Reply frame received for 5\nI1118 07:38:00.969511 3479 log.go:181] (0x4000c8a000) Data frame received for 3\nI1118 07:38:00.969741 3479 log.go:181] (0x4000c8a000) Data frame received for 1\nI1118 07:38:00.969966 3479 log.go:181] (0x4000c8a000) Data frame received for 5\nI1118 07:38:00.970188 3479 log.go:181] (0x4000d820a0) (5) Data frame handling\nI1118 07:38:00.970311 3479 log.go:181] (0x4000d82000) (3) Data frame handling\nI1118 07:38:00.970561 3479 log.go:181] (0x40005a0280) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 32014\nConnection to 172.18.0.17 32014 port [tcp/32014] succeeded!\nI1118 07:38:00.971910 3479 log.go:181] (0x40005a0280) (1) Data frame sent\nI1118 07:38:00.972249 3479 log.go:181] (0x4000d820a0) (5) Data frame sent\nI1118 07:38:00.974024 3479 log.go:181] (0x4000c8a000) Data frame received for 5\nI1118 07:38:00.974087 3479 log.go:181] (0x4000d820a0) (5) Data frame handling\nI1118 07:38:00.975280 3479 log.go:181] (0x4000c8a000) (0x40005a0280) Stream removed, broadcasting: 1\nI1118 07:38:00.977896 3479 log.go:181] (0x4000c8a000) Go away received\nI1118 07:38:00.980688 3479 log.go:181] (0x4000c8a000) (0x40005a0280) Stream removed, broadcasting: 1\nI1118 07:38:00.981278 3479 log.go:181] (0x4000c8a000) (0x4000d82000) Stream removed, broadcasting: 3\nI1118 07:38:00.981449 3479 log.go:181] (0x4000c8a000) (0x4000d820a0) Stream removed, 
broadcasting: 5\n" Nov 18 07:38:00.990: INFO: stdout: "" Nov 18 07:38:00.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-8951 execpod-affinityzz5jm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:32014/ ; done' Nov 18 07:38:02.621: INFO: stderr: "I1118 07:38:02.426437 3500 log.go:181] (0x4000135a20) (0x40005466e0) Create stream\nI1118 07:38:02.431620 3500 log.go:181] (0x4000135a20) (0x40005466e0) Stream added, broadcasting: 1\nI1118 07:38:02.446250 3500 log.go:181] (0x4000135a20) Reply frame received for 1\nI1118 07:38:02.446819 3500 log.go:181] (0x4000135a20) (0x40005b2000) Create stream\nI1118 07:38:02.446895 3500 log.go:181] (0x4000135a20) (0x40005b2000) Stream added, broadcasting: 3\nI1118 07:38:02.448635 3500 log.go:181] (0x4000135a20) Reply frame received for 3\nI1118 07:38:02.449129 3500 log.go:181] (0x4000135a20) (0x40006aa000) Create stream\nI1118 07:38:02.449228 3500 log.go:181] (0x4000135a20) (0x40006aa000) Stream added, broadcasting: 5\nI1118 07:38:02.451090 3500 log.go:181] (0x4000135a20) Reply frame received for 5\nI1118 07:38:02.530502 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.530903 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.531026 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.531137 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.531727 3500 log.go:181] (0x40005b2000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.532520 3500 log.go:181] (0x40006aa000) (5) Data frame sent\nI1118 07:38:02.532629 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.532741 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.532924 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.533231 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.533358 3500 log.go:181] (0x40006aa000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.533458 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.533611 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.533743 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.533883 3500 log.go:181] (0x40006aa000) (5) Data frame sent\nI1118 07:38:02.538603 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.538702 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.538810 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.539361 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.539447 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.539571 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.539666 3500 log.go:181] (0x40006aa000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.539762 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.539848 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.543334 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.543391 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.543471 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.544015 3500 log.go:181] 
(0x4000135a20) Data frame received for 3\nI1118 07:38:02.544087 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.544982 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.549934 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.550367 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.551935 3500 log.go:181] (0x40006aa000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.556552 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.556632 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.556712 3500 log.go:181] (0x40006aa000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.556830 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.556982 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.557075 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.557185 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.557254 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.557326 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.557387 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.557441 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.557507 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.558041 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.558116 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.558191 3500 log.go:181] (0x40006aa000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.558475 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.558536 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.558604 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.560791 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.560955 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.561027 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.561481 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.561575 3500 log.go:181] (0x40006aa000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.561659 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.561746 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.561822 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.561879 3500 log.go:181] (0x40006aa000) (5) Data frame sent\nI1118 07:38:02.564894 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.564978 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.565067 3500 log.go:181] (0x40005b2000) (3) Data frame sent\nI1118 07:38:02.565268 3500 log.go:181] (0x4000135a20) Data frame received for 5\nI1118 07:38:02.565335 3500 log.go:181] (0x40006aa000) (5) Data frame handling\nI1118 07:38:02.565416 3500 log.go:181] (0x40006aa000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/\nI1118 07:38:02.565507 3500 log.go:181] (0x4000135a20) Data frame received for 3\nI1118 07:38:02.565591 3500 log.go:181] (0x40005b2000) (3) Data frame handling\nI1118 07:38:02.565663 3500 log.go:181] 
[... repetitive SPDY stream debug output trimmed; the shell trace in this stream shows 16 iterations of '+ echo' and '+ curl -q -s --connect-timeout 2 http://172.18.0.18:32014/', after which streams 1, 3 and 5 are removed ...]"
Nov 18 07:38:02.628: INFO: stdout: "\naffinity-nodeport-2hthg" (x16: every request was answered by the same backend)
Nov 18 07:38:02.629: INFO: Received response from host: affinity-nodeport-2hthg (x16)
Nov 18 07:38:02.629: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-8951, will wait for the garbage collector to delete the pods
Nov 18 07:38:02.779: INFO: Deleting ReplicationController affinity-nodeport took: 9.618011ms
Nov 18 07:38:03.381: INFO: Terminating ReplicationController affinity-nodeport pods took: 601.49632ms
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 07:38:19.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8951" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:39.514 seconds]
[sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":224,"skipped":3628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
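The passing run above can be reproduced outside the framework. A minimal sketch, assuming a deployment named affinity-demo that serves its pod hostname on port 80 and a reachable node address in $NODE_IP (these names are illustrative, not from the run):

kubectl expose deployment affinity-demo --type=NodePort --port=80
# Pin each client to a single backend, as the affinity-nodeport service does:
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
NODE_PORT=$(kubectl get service affinity-demo -o jsonpath='{.spec.ports[0].nodePort}')
for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; done

With ClientIP affinity in place, all 16 iterations should print the same pod name, matching the sixteen identical affinity-nodeport-2hthg responses logged above.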
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 07:38:19.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Nov 18 07:38:19.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 07:40:04.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8839" for this suite.
• [SLOW TEST:105.067 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":225,"skipped":3683,"failed":0}
SSSSSSSSSSSSSSSSSSS
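The published-spec check can be approximated with kubectl against any multi-version CRD. A hedged sketch, assuming a hypothetical CRD foos.example.com whose second .spec.versions entry is the one being retired (the definition name grepped for below is likewise illustrative):

# Stop serving the second version; the apiserver republishes /openapi/v2 shortly after.
kubectl patch crd foos.example.com --type=json \
  -p '[{"op":"replace","path":"/spec/versions/1/served","value":false}]'
# The retired version's definition should drop out of the spec while the other remains:
kubectl get --raw /openapi/v2 | grep -c 'com.example.v2.Foo'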
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 07:40:04.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Nov 18 07:40:04.809: INFO: Waiting up to 5m0s for pod "downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e" in namespace "downward-api-4175" to be "Succeeded or Failed"
Nov 18 07:40:04.824: INFO: Pod "downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.779627ms
Nov 18 07:40:06.830: INFO: Pod "downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021168266s
Nov 18 07:40:08.838: INFO: Pod "downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029389078s
STEP: Saw pod success
Nov 18 07:40:08.838: INFO: Pod "downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e" satisfied condition "Succeeded or Failed"
Nov 18 07:40:08.844: INFO: Trying to get logs from node leguer-worker2 pod downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e container dapi-container:
STEP: delete the pod
Nov 18 07:40:08.900: INFO: Waiting for pod downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e to disappear
Nov 18 07:40:08.915: INFO: Pod downward-api-510e270f-5de1-4024-98df-cdc7f0a91e3e no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 07:40:08.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4175" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":226,"skipped":3702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
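The pod this test runs receives the node address through the downward API's fieldRef. A minimal stand-alone sketch of the same wiring (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP    # resolved by the kubelet at pod start
EOF
kubectl logs downward-hostip-demo   # once the pod completes, prints the node's IP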
------------------------------
[sig-storage] Projected secret
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 07:40:08.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-e1eca188-27f5-47f7-819e-5ddb914f92de
STEP: Creating secret with name s-test-opt-upd-a5abcba7-4a7e-48a9-98b4-c95d79999a7d
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-e1eca188-27f5-47f7-819e-5ddb914f92de
STEP: Updating secret s-test-opt-upd-a5abcba7-4a7e-48a9-98b4-c95d79999a7d
STEP: Creating secret with name s-test-opt-create-d4b5f391-083a-4f5c-b1ac-2e71c4b4779e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 07:40:17.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1515" for this suite.
• [SLOW TEST:8.434 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":227,"skipped":3725,"failed":0}
SSSSSSSSSSSSSS
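The optional-update behaviour above hinges on the optional flag of a projected secret source: a not-yet-created secret must not block pod startup, and its keys must appear in the volume once the secret exists. A hedged sketch of such a volume (all names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: reader
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-create-demo   # may not exist yet; the pod still starts
          optional: true
EOF
# Creating the secret afterwards should surface its keys under /etc/projected
# within the kubelet's sync period:
kubectl create secret generic s-test-opt-create-demo --from-literal=data-1=value-1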
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 07:40:17.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-6hn9
STEP: Creating a pod to test atomic-volume-subpath
Nov 18 07:40:17.507: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6hn9" in namespace "subpath-5305" to be "Succeeded or Failed"
Nov 18 07:40:17.534: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.711561ms
Nov 18 07:40:19.573: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066353537s
Nov 18 07:40:21.580: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 4.072780789s
Nov 18 07:40:23.586: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 6.078632228s
Nov 18 07:40:25.608: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 8.10053841s
Nov 18 07:40:27.615: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 10.107915572s
Nov 18 07:40:29.650: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 12.142537441s
Nov 18 07:40:31.662: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 14.154917291s
Nov 18 07:40:33.670: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 16.162912328s
Nov 18 07:40:35.678: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 18.170674025s
Nov 18 07:40:37.704: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 20.197346187s
Nov 18 07:40:39.712: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 22.204893999s
Nov 18 07:40:41.720: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Running", Reason="", readiness=true. Elapsed: 24.212788872s
Nov 18 07:40:43.727: INFO: Pod "pod-subpath-test-configmap-6hn9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.219695403s
STEP: Saw pod success
Nov 18 07:40:43.727: INFO: Pod "pod-subpath-test-configmap-6hn9" satisfied condition "Succeeded or Failed"
Nov 18 07:40:43.734: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-6hn9 container test-container-subpath-configmap-6hn9:
STEP: delete the pod
Nov 18 07:40:43.784: INFO: Waiting for pod pod-subpath-test-configmap-6hn9 to disappear
Nov 18 07:40:43.807: INFO: Pod pod-subpath-test-configmap-6hn9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6hn9
Nov 18 07:40:43.807: INFO: Deleting pod "pod-subpath-test-configmap-6hn9" in namespace "subpath-5305"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 07:40:43.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5305" for this suite.
• [SLOW TEST:26.494 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":228,"skipped":3739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
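The subpath test above mounts a single configmap key as a file inside the container through a subPath volume mount. A minimal sketch of that mount shape (names, image and key are illustrative):

kubectl create configmap subpath-demo --from-literal=configmap-file=mount-tester-content
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.28
    command: ["sh", "-c", "cat /probe-volume/probe-file"]
    volumeMounts:
    - name: config
      mountPath: /probe-volume/probe-file
      subPath: configmap-file      # mounts one key as a file, not the whole volume
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF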
------------------------------
[sig-network] Services
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Nov 18 07:40:43.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-9044
STEP: creating service affinity-nodeport-transition in namespace services-9044
STEP: creating replication controller affinity-nodeport-transition in namespace services-9044
I1118 07:40:44.136369 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-9044, replica count: 3
I1118 07:40:47.187955 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1118 07:40:50.188962 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 18 07:40:50.205: INFO: Creating new exec pod
Nov 18 07:40:55.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Nov 18 07:40:56.858: INFO: stderr: "[... SPDY stream debug output trimmed ...]\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Nov 18 07:40:56.859: INFO: stdout: ""
Nov 18 07:40:56.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c nc -zv -t -w 2 10.96.225.23 80'
Nov 18 07:40:58.582: INFO: stderr: "[... SPDY stream debug output trimmed ...]\n+ nc -zv -t -w 2 10.96.225.23 80\nConnection to 10.96.225.23 80 port [tcp/http] succeeded!\n"
Nov 18 07:40:58.583: INFO: stdout: ""
Nov 18 07:40:58.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 31544'
Nov 18 07:41:00.217: INFO: stderr: "[... SPDY stream debug output trimmed ...]\n+ nc -zv -t -w 2 172.18.0.18 31544\nConnection to 172.18.0.18 31544 port [tcp/31544] succeeded!\n"
Nov 18 07:41:00.218: INFO: stdout: ""
Nov 18 07:41:00.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 31544'
Nov 18 07:41:01.907: INFO: stderr: "[... SPDY stream debug output trimmed ...]\n+ nc -zv -t -w 2 172.18.0.17 31544\nConnection to 172.18.0.17 31544 port [tcp/31544] succeeded!\n"
Nov 18 07:41:01.908: INFO: stdout: ""
Nov 18 07:41:01.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:31544/ ; done'
Nov 18 07:41:03.622: INFO: stderr: "[... SPDY stream debug output trimmed; the shell trace shows '+ seq 0 15' followed by 16 iterations of '+ echo' and '+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/' ...]"
log.go:181] (0x4000416160) (0x400025a1e0) Stream removed, broadcasting: 1\nI1118 07:41:03.608075 3600 log.go:181] (0x4000416160) Go away received\nI1118 07:41:03.612132 3600 log.go:181] (0x4000416160) (0x400025a1e0) Stream removed, broadcasting: 1\nI1118 07:41:03.612530 3600 log.go:181] (0x4000416160) (0x400025a280) Stream removed, broadcasting: 3\nI1118 07:41:03.612801 3600 log.go:181] (0x4000416160) (0x4000642000) Stream removed, broadcasting: 5\n" Nov 18 07:41:03.627: INFO: stdout: "\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-h4t66\naffinity-nodeport-transition-h4t66\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-h4t66\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-h4t66\naffinity-nodeport-transition-h4t66\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-5tcdt\naffinity-nodeport-transition-76pv6" Nov 18 07:41:03.627: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.627: INFO: Received response from host: affinity-nodeport-transition-h4t66 Nov 18 07:41:03.627: INFO: Received response from host: affinity-nodeport-transition-h4t66 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-h4t66 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-h4t66 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-h4t66 Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-5tcdt Nov 18 07:41:03.628: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:03.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-9044 execpod-affinityhs9zj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:31544/ ; done' Nov 18 07:41:05.333: INFO: stderr: "I1118 07:41:05.105337 3620 log.go:181] (0x40000f2000) (0x4000b0c0a0) Create stream\nI1118 07:41:05.110413 3620 log.go:181] (0x40000f2000) (0x4000b0c0a0) Stream added, broadcasting: 1\nI1118 07:41:05.126233 3620 log.go:181] (0x40000f2000) Reply frame received for 1\nI1118 07:41:05.126883 3620 log.go:181] (0x40000f2000) (0x4000400960) Create stream\nI1118 07:41:05.126948 3620 log.go:181] (0x40000f2000) (0x4000400960) Stream added, broadcasting: 3\nI1118 07:41:05.128434 3620 log.go:181] (0x40000f2000) Reply frame received for 3\nI1118 07:41:05.128932 3620 log.go:181] (0x40000f2000) (0x40007240a0) Create stream\nI1118 07:41:05.129063 3620 log.go:181] (0x40000f2000) (0x40007240a0) Stream added, 
broadcasting: 5\nI1118 07:41:05.130794 3620 log.go:181] (0x40000f2000) Reply frame received for 5\nI1118 07:41:05.217201 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.217796 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.218029 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.218195 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.218882 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.219126 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.223179 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.223285 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.223377 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.223900 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.224021 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.224116 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.224238 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.224331 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.224427 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.229263 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.229455 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.229649 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.229913 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.230007 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.230091 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.230156 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.230214 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.230287 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.235291 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.235378 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.235467 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.235990 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.236075 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.236144 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.236210 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.236275 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.236353 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.240372 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.240486 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.240645 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.241057 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.241133 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.241222 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.241303 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.241369 3620 
log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.241447 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.245871 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.246050 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.246219 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.246389 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.246525 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.246647 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.246792 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.246901 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.247019 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.250938 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.251104 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.251210 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.251378 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.251493 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.251599 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.251723 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.251828 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.251944 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.255370 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.255468 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.255598 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.255998 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.256112 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.256229 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.256380 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.256539 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.256627 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.263209 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.263297 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.263388 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.263829 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.263940 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.264039 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.264163 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.264265 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.264359 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.270388 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.270520 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.270626 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.270819 3620 
log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.270926 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.271010 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.271088 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.271159 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.271246 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.277250 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.277409 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.277565 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.277758 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.277896 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.277999 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.278107 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.278188 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.278258 3620 log.go:181] (0x4000400960) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.283984 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.284108 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.284241 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.284757 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.284956 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.285081 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n+ echo\n+ curlI1118 07:41:05.285181 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.285314 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.285506 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.285667 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.285804 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.285964 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.288770 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.288954 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.289085 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.289637 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.289746 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curlI1118 07:41:05.289833 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.289923 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.290002 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.290124 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.290240 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.290330 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.290442 3620 log.go:181] (0x40007240a0) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.294752 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.294851 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.294956 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 
07:41:05.295234 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.295395 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.295530 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.295679 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.295824 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.295972 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.300643 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.300766 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.300999 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.301746 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.301890 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.302130 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.305393 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.305503 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.305612 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.309602 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.309681 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.309753 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.309995 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.310080 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.310202 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.310322 3620 log.go:181] (0x40007240a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31544/\nI1118 07:41:05.310410 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.310534 3620 log.go:181] (0x40007240a0) (5) Data frame sent\nI1118 07:41:05.315718 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.315799 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.315898 3620 log.go:181] (0x4000400960) (3) Data frame sent\nI1118 07:41:05.316524 3620 log.go:181] (0x40000f2000) Data frame received for 3\nI1118 07:41:05.316623 3620 log.go:181] (0x4000400960) (3) Data frame handling\nI1118 07:41:05.316732 3620 log.go:181] (0x40000f2000) Data frame received for 5\nI1118 07:41:05.316922 3620 log.go:181] (0x40007240a0) (5) Data frame handling\nI1118 07:41:05.318363 3620 log.go:181] (0x40000f2000) Data frame received for 1\nI1118 07:41:05.318434 3620 log.go:181] (0x4000b0c0a0) (1) Data frame handling\nI1118 07:41:05.318511 3620 log.go:181] (0x4000b0c0a0) (1) Data frame sent\nI1118 07:41:05.319290 3620 log.go:181] (0x40000f2000) (0x4000b0c0a0) Stream removed, broadcasting: 1\nI1118 07:41:05.321597 3620 log.go:181] (0x40000f2000) Go away received\nI1118 07:41:05.324296 3620 log.go:181] (0x40000f2000) (0x4000b0c0a0) Stream removed, broadcasting: 1\nI1118 07:41:05.324584 3620 log.go:181] (0x40000f2000) (0x4000400960) Stream removed, broadcasting: 3\nI1118 07:41:05.325194 3620 log.go:181] (0x40000f2000) (0x40007240a0) Stream removed, broadcasting: 5\n" Nov 18 07:41:05.339: INFO: stdout: 
"\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6\naffinity-nodeport-transition-76pv6" Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.339: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.340: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.340: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.340: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.340: INFO: Received response from host: affinity-nodeport-transition-76pv6 Nov 18 07:41:05.340: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9044, will wait for the garbage collector to delete the pods Nov 18 07:41:05.466: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.940934ms Nov 18 07:41:05.967: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.973621ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:41:19.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9044" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:35.834 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":229,"skipped":3772,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:41:19.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-d914cad3-4f46-4507-9aec-cd85dc37eab9 STEP: Creating a pod to test consume secrets Nov 18 07:41:19.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a" in namespace "projected-9385" to be "Succeeded or Failed" Nov 18 07:41:19.827: INFO: Pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.015092ms Nov 18 07:41:21.834: INFO: Pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035445445s Nov 18 07:41:24.179: INFO: Pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a": Phase="Running", Reason="", readiness=true. Elapsed: 4.380086147s Nov 18 07:41:26.190: INFO: Pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.39157494s STEP: Saw pod success Nov 18 07:41:26.190: INFO: Pod "pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a" satisfied condition "Succeeded or Failed" Nov 18 07:41:26.217: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a container projected-secret-volume-test: STEP: delete the pod Nov 18 07:41:26.330: INFO: Waiting for pod pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a to disappear Nov 18 07:41:26.362: INFO: Pod pod-projected-secrets-634cea0d-f0c7-48db-9fb7-0763b9e0226a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:41:26.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9385" for this suite. • [SLOW TEST:6.683 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3786,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:41:26.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1285/configmap-test-e3d8ba81-d199-4064-8507-fea390e2f202 STEP: Creating a pod to test consume configMaps Nov 18 07:41:26.513: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c" in namespace "configmap-1285" to be "Succeeded or Failed" Nov 18 07:41:26.555: INFO: Pod "pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.702896ms Nov 18 07:41:28.683: INFO: Pod "pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169728602s Nov 18 07:41:30.693: INFO: Pod "pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.179836996s STEP: Saw pod success Nov 18 07:41:30.694: INFO: Pod "pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c" satisfied condition "Succeeded or Failed" Nov 18 07:41:30.698: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c container env-test: STEP: delete the pod Nov 18 07:41:31.115: INFO: Waiting for pod pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c to disappear Nov 18 07:41:31.126: INFO: Pod pod-configmaps-ffe38c25-c551-4cbe-9e97-98c108177e6c no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:41:31.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1285" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3803,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:41:31.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Nov 18 07:41:36.319: INFO: Successfully updated pod "pod-update-activedeadlineseconds-be26bc71-5ab5-4892-b7df-1b1861c970ed" Nov 18 07:41:36.319: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-be26bc71-5ab5-4892-b7df-1b1861c970ed" in namespace "pods-514" to be "terminated due to deadline exceeded" Nov 18 07:41:36.364: INFO: Pod "pod-update-activedeadlineseconds-be26bc71-5ab5-4892-b7df-1b1861c970ed": Phase="Running", Reason="", readiness=true. Elapsed: 44.426537ms Nov 18 07:41:38.371: INFO: Pod "pod-update-activedeadlineseconds-be26bc71-5ab5-4892-b7df-1b1861c970ed": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.052114461s Nov 18 07:41:38.372: INFO: Pod "pod-update-activedeadlineseconds-be26bc71-5ab5-4892-b7df-1b1861c970ed" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:41:38.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-514" for this suite. 
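------------------------------
The Pods test above creates a running pod, shrinks its activeDeadlineSeconds, and waits for the kubelet to kill it with reason DeadlineExceeded. A sketch of the same update done as a strategic-merge patch, assuming the kubeconfig path this suite logs; the namespace, pod name, and deadline value are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lowering activeDeadlineSeconds on a running pod makes the kubelet
	// terminate it once the deadline passes; the pod phase then flips to
	// Failed with reason DeadlineExceeded, the condition the test polls for.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	_, err = client.CoreV1().Pods("default").Patch(
		context.TODO(), "pod-update-activedeadlineseconds", // placeholder name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("activeDeadlineSeconds patched")
}
------------------------------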
• [SLOW TEST:7.246 seconds] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3812,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:41:38.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-hkd7 STEP: Creating a pod to test atomic-volume-subpath Nov 18 07:41:38.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hkd7" in namespace "subpath-7403" to be "Succeeded or Failed" Nov 18 07:41:38.509: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089289ms Nov 18 07:41:40.518: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014979881s Nov 18 07:41:42.526: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 4.022436693s Nov 18 07:41:44.534: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 6.030821244s Nov 18 07:41:46.541: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 8.038342943s Nov 18 07:41:48.549: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 10.046093429s Nov 18 07:41:50.557: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 12.054274524s Nov 18 07:41:52.566: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 14.062665404s Nov 18 07:41:54.574: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 16.071015358s Nov 18 07:41:56.583: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 18.079667559s Nov 18 07:41:58.591: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. Elapsed: 20.087912975s Nov 18 07:42:00.598: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.095206295s Nov 18 07:42:02.607: INFO: Pod "pod-subpath-test-secret-hkd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.104159828s STEP: Saw pod success Nov 18 07:42:02.608: INFO: Pod "pod-subpath-test-secret-hkd7" satisfied condition "Succeeded or Failed" Nov 18 07:42:02.614: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-secret-hkd7 container test-container-subpath-secret-hkd7: STEP: delete the pod Nov 18 07:42:02.736: INFO: Waiting for pod pod-subpath-test-secret-hkd7 to disappear Nov 18 07:42:02.751: INFO: Pod pod-subpath-test-secret-hkd7 no longer exists STEP: Deleting pod pod-subpath-test-secret-hkd7 Nov 18 07:42:02.751: INFO: Deleting pod "pod-subpath-test-secret-hkd7" in namespace "subpath-7403" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:42:02.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7403" for this suite. • [SLOW TEST:24.378 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":233,"skipped":3812,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:42:02.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Nov 18 07:42:02.893: INFO: Waiting up to 1m0s for all nodes to be ready Nov 18 07:43:02.970: INFO: Waiting for terminating namespaces to be deleted... 
[BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:43:02.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Nov 18 07:43:07.128: INFO: found a healthy node: leguer-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:43:27.725: INFO: pods created so far: [1 1 1] Nov 18 07:43:27.726: INFO: length of pods created so far: 3 Nov 18 07:43:45.742: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:43:52.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3364" for this suite. [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:43:52.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3168" for this suite. 
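------------------------------
PreemptionExecutionPath runs ReplicaSets at different pod priorities on a single node and verifies higher-priority pods displace lower-priority ones; the "[1 1 1]" and "[2 2 1]" counts above track pods created per ReplicaSet as preemption proceeds. A sketch of the two objects that drive preemption — the class name and value are illustrative, with only the agnhost image taken from this suite:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PriorityClass with a higher Value outranks lower ones; when the
	// node is full, the scheduler preempts lower-priority pods to make
	// room for a pending higher-priority pod.
	pc := schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority"},
		Value:       1000,
		Description: "may preempt lower-priority workloads",
	}

	// Pods opt in by naming the class in their spec.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "important"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority",
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
			}},
		},
	}

	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(pc)
	_ = enc.Encode(pod)
}
------------------------------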
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:110.205 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":234,"skipped":3848,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:43:52.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Nov 18 07:43:57.094: INFO: Pod pod-hostip-bfe5163c-0da3-4454-b4c8-267b1682b1df has hostIP: 172.18.0.18 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:43:57.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9164" for this suite. 
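------------------------------
The host-IP test only asserts that status.hostIP is populated once the pod is scheduled — the 172.18.0.18 above is a kind node address. A small sketch of the same check, polling until the kubelet reports the address; the kubeconfig path matches the one this suite logs, while the namespace and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForHostIP polls the pod until status.hostIP is set, i.e. until the
// scheduler has placed it and the kubelet has reported back.
func waitForHostIP(client kubernetes.Interface, ns, name string) (string, error) {
	for i := 0; i < 30; i++ {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return "", err
		}
		if pod.Status.HostIP != "" {
			return pod.Status.HostIP, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("pod %s/%s never reported a hostIP", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	ip, err := waitForHostIP(kubernetes.NewForConfigOrDie(cfg), "default", "pod-hostip")
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", ip)
}
------------------------------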
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:43:57.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-34c5385d-00dd-45e1-b92e-37a74cd450ee STEP: Creating a pod to test consume configMaps Nov 18 07:43:57.345: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb" in namespace "projected-7165" to be "Succeeded or Failed" Nov 18 07:43:57.379: INFO: Pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.621253ms Nov 18 07:43:59.414: INFO: Pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069454382s Nov 18 07:44:01.721: INFO: Pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb": Phase="Running", Reason="", readiness=true. Elapsed: 4.376038471s Nov 18 07:44:03.852: INFO: Pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.506759958s STEP: Saw pod success Nov 18 07:44:03.852: INFO: Pod "pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb" satisfied condition "Succeeded or Failed" Nov 18 07:44:03.859: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb container projected-configmap-volume-test: STEP: delete the pod Nov 18 07:44:03.920: INFO: Waiting for pod pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb to disappear Nov 18 07:44:03.931: INFO: Pod pod-projected-configmaps-6a9cb5d8-e359-4b9b-af61-a5be13fa3dcb no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:44:03.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7165" for this suite. 
• [SLOW TEST:6.829 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":236,"skipped":3917,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:44:03.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:44:04.104: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Nov 18 07:44:24.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7571 create -f -' Nov 18 07:44:31.573: INFO: stderr: "" Nov 18 07:44:31.574: INFO: stdout: "e2e-test-crd-publish-openapi-3040-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 18 07:44:31.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7571 delete e2e-test-crd-publish-openapi-3040-crds test-cr' Nov 18 07:44:32.973: INFO: stderr: "" Nov 18 07:44:32.974: INFO: stdout: "e2e-test-crd-publish-openapi-3040-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Nov 18 07:44:32.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7571 apply -f -' Nov 18 07:44:35.501: INFO: stderr: "" Nov 18 07:44:35.501: INFO: stdout: "e2e-test-crd-publish-openapi-3040-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Nov 18 07:44:35.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7571 delete e2e-test-crd-publish-openapi-3040-crds test-cr' Nov 18 07:44:36.893: INFO: stderr: "" Nov 18 07:44:36.893: INFO: stdout: "e2e-test-crd-publish-openapi-3040-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Nov 18 07:44:36.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-3040-crds' Nov 18 07:44:38.968: INFO: stderr: "" Nov 18 07:44:38.968: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3040-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:44:49.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7571" for this suite. • [SLOW TEST:45.958 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":237,"skipped":3919,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:44:49.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 18 07:44:50.004: INFO: Waiting up to 5m0s for pod "pod-3541daec-fd9a-4951-9067-e87be26f6097" in namespace "emptydir-7447" to be "Succeeded or Failed" Nov 18 07:44:50.024: INFO: Pod "pod-3541daec-fd9a-4951-9067-e87be26f6097": Phase="Pending", Reason="", readiness=false. Elapsed: 18.958902ms Nov 18 07:44:52.170: INFO: Pod "pod-3541daec-fd9a-4951-9067-e87be26f6097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165116886s Nov 18 07:44:55.235: INFO: Pod "pod-3541daec-fd9a-4951-9067-e87be26f6097": Phase="Running", Reason="", readiness=true. Elapsed: 5.230892688s Nov 18 07:44:57.242: INFO: Pod "pod-3541daec-fd9a-4951-9067-e87be26f6097": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.237719405s STEP: Saw pod success Nov 18 07:44:57.242: INFO: Pod "pod-3541daec-fd9a-4951-9067-e87be26f6097" satisfied condition "Succeeded or Failed" Nov 18 07:44:57.247: INFO: Trying to get logs from node leguer-worker pod pod-3541daec-fd9a-4951-9067-e87be26f6097 container test-container: STEP: delete the pod Nov 18 07:44:57.294: INFO: Waiting for pod pod-3541daec-fd9a-4951-9067-e87be26f6097 to disappear Nov 18 07:44:57.305: INFO: Pod pod-3541daec-fd9a-4951-9067-e87be26f6097 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:44:57.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7447" for this suite. • [SLOW TEST:7.412 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":238,"skipped":3950,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:44:57.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Nov 18 07:44:57.521: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:44:58.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2477" for this suite. 
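------------------------------
Passing -p 0, as the asynchronously launched proxy command above does, asks kubectl proxy to bind an ephemeral port, which the test then curls at /api/. A sketch of driving that from Go; the announcement format ("Starting to serve on 127.0.0.1:NNNNN") is an assumption about kubectl's stdout and may vary by version:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Port 0 lets the kernel pick any free port; kubectl announces the
	// address it actually bound on stdout, so we scrape it from there.
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		line := scanner.Text()
		if i := strings.LastIndex(line, "127.0.0.1:"); i >= 0 {
			// From here the test would GET http://<addr>/api/ and check
			// that the apiserver responds through the proxy.
			fmt.Println("proxy listening on", line[i:])
			break
		}
	}
	_ = cmd.Process.Kill()
}
------------------------------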
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":239,"skipped":3962,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:44:58.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-26908c83-9b60-40b5-bb2c-2176a3231e4c in namespace container-probe-6031 Nov 18 07:45:03.076: INFO: Started pod busybox-26908c83-9b60-40b5-bb2c-2176a3231e4c in namespace container-probe-6031 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 07:45:03.082: INFO: Initial restart count of pod busybox-26908c83-9b60-40b5-bb2c-2176a3231e4c is 0 Nov 18 07:45:49.270: INFO: Restart count of pod container-probe-6031/busybox-26908c83-9b60-40b5-bb2c-2176a3231e4c is now 1 (46.187636658s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:45:49.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6031" for this suite. 
• [SLOW TEST:50.419 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3966,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:45:49.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Nov 18 07:45:49.457: INFO: Created pod &Pod{ObjectMeta:{dns-8662 dns-8662 /api/v1/namespaces/dns-8662/pods/dns-8662 6909fb57-2ff3-44a1-b4f0-0df1ff6f439a 12008506 0 2020-11-18 07:45:49 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-11-18 07:45:49 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bgpbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bgpbd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bgpbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Nov 18 07:45:49.480: INFO: The status of Pod dns-8662 is Pending, waiting for it to be Running (with Ready = true) Nov 18 07:45:51.487: INFO: The status of Pod dns-8662 is Pending, waiting for it 
to be Running (with Ready = true) Nov 18 07:45:53.488: INFO: The status of Pod dns-8662 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Nov 18 07:45:53.489: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8662 PodName:dns-8662 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:45:53.489: INFO: >>> kubeConfig: /root/.kube/config I1118 07:45:53.557926 10 log.go:181] (0x4000a9c000) (0x4002aa3720) Create stream I1118 07:45:53.558140 10 log.go:181] (0x4000a9c000) (0x4002aa3720) Stream added, broadcasting: 1 I1118 07:45:53.562223 10 log.go:181] (0x4000a9c000) Reply frame received for 1 I1118 07:45:53.562438 10 log.go:181] (0x4000a9c000) (0x40037c4d20) Create stream I1118 07:45:53.562560 10 log.go:181] (0x4000a9c000) (0x40037c4d20) Stream added, broadcasting: 3 I1118 07:45:53.564170 10 log.go:181] (0x4000a9c000) Reply frame received for 3 I1118 07:45:53.564326 10 log.go:181] (0x4000a9c000) (0x40037c4dc0) Create stream I1118 07:45:53.564403 10 log.go:181] (0x4000a9c000) (0x40037c4dc0) Stream added, broadcasting: 5 I1118 07:45:53.565810 10 log.go:181] (0x4000a9c000) Reply frame received for 5 I1118 07:45:53.692525 10 log.go:181] (0x4000a9c000) Data frame received for 3 I1118 07:45:53.692696 10 log.go:181] (0x40037c4d20) (3) Data frame handling I1118 07:45:53.693038 10 log.go:181] (0x40037c4d20) (3) Data frame sent I1118 07:45:53.694539 10 log.go:181] (0x4000a9c000) Data frame received for 5 I1118 07:45:53.694726 10 log.go:181] (0x40037c4dc0) (5) Data frame handling I1118 07:45:53.695005 10 log.go:181] (0x4000a9c000) Data frame received for 3 I1118 07:45:53.695203 10 log.go:181] (0x40037c4d20) (3) Data frame handling I1118 07:45:53.696989 10 log.go:181] (0x4000a9c000) Data frame received for 1 I1118 07:45:53.697104 10 log.go:181] (0x4002aa3720) (1) Data frame handling I1118 07:45:53.697220 10 log.go:181] (0x4002aa3720) (1) Data frame sent I1118 07:45:53.697377 10 log.go:181] (0x4000a9c000) (0x4002aa3720) Stream removed, broadcasting: 1 I1118 07:45:53.697546 10 log.go:181] (0x4000a9c000) Go away received I1118 07:45:53.697946 10 log.go:181] (0x4000a9c000) (0x4002aa3720) Stream removed, broadcasting: 1 I1118 07:45:53.698122 10 log.go:181] (0x4000a9c000) (0x40037c4d20) Stream removed, broadcasting: 3 I1118 07:45:53.698252 10 log.go:181] (0x4000a9c000) (0x40037c4dc0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Nov 18 07:45:53.699: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8662 PodName:dns-8662 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 07:45:53.699: INFO: >>> kubeConfig: /root/.kube/config I1118 07:45:53.764393 10 log.go:181] (0x4007312420) (0x4002c6bb80) Create stream I1118 07:45:53.764539 10 log.go:181] (0x4007312420) (0x4002c6bb80) Stream added, broadcasting: 1 I1118 07:45:53.770440 10 log.go:181] (0x4007312420) Reply frame received for 1 I1118 07:45:53.770795 10 log.go:181] (0x4007312420) (0x4002c6bc20) Create stream I1118 07:45:53.770954 10 log.go:181] (0x4007312420) (0x4002c6bc20) Stream added, broadcasting: 3 I1118 07:45:53.773107 10 log.go:181] (0x4007312420) Reply frame received for 3 I1118 07:45:53.773272 10 log.go:181] (0x4007312420) (0x4001ea2960) Create stream I1118 07:45:53.773361 10 log.go:181] (0x4007312420) (0x4001ea2960) Stream added, broadcasting: 5 I1118 07:45:53.775004 10 log.go:181] (0x4007312420) Reply frame received for 5 I1118 07:45:53.848506 10 log.go:181] (0x4007312420) Data frame received for 3 I1118 07:45:53.848649 10 log.go:181] (0x4002c6bc20) (3) Data frame handling I1118 07:45:53.848769 10 log.go:181] (0x4002c6bc20) (3) Data frame sent I1118 07:45:53.850379 10 log.go:181] (0x4007312420) Data frame received for 5 I1118 07:45:53.850604 10 log.go:181] (0x4001ea2960) (5) Data frame handling I1118 07:45:53.850950 10 log.go:181] (0x4007312420) Data frame received for 3 I1118 07:45:53.851171 10 log.go:181] (0x4002c6bc20) (3) Data frame handling I1118 07:45:53.852446 10 log.go:181] (0x4007312420) Data frame received for 1 I1118 07:45:53.852560 10 log.go:181] (0x4002c6bb80) (1) Data frame handling I1118 07:45:53.852754 10 log.go:181] (0x4002c6bb80) (1) Data frame sent I1118 07:45:53.853055 10 log.go:181] (0x4007312420) (0x4002c6bb80) Stream removed, broadcasting: 1 I1118 07:45:53.853261 10 log.go:181] (0x4007312420) Go away received I1118 07:45:53.853827 10 log.go:181] (0x4007312420) (0x4002c6bb80) Stream removed, broadcasting: 1 I1118 07:45:53.854041 10 log.go:181] (0x4007312420) (0x4002c6bc20) Stream removed, broadcasting: 3 I1118 07:45:53.854183 10 log.go:181] (0x4007312420) (0x4001ea2960) Stream removed, broadcasting: 5 Nov 18 07:45:53.854: INFO: Deleting pod dns-8662... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:45:53.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8662" for this suite. 
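------------------------------
The pod dump above shows the whole mechanism: DNSPolicy "None" plus a DNSConfig carrying nameserver 1.1.1.1 and search domain resolv.conf.local, which the agnhost pause container then reports back through the dns-suffix and dns-server-list exec commands. A condensed sketch of just the fields that matter, with the DNS values, image, and args taken from this run and the pod name a placeholder:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// DNSPolicy None makes the kubelet ignore cluster DNS entirely and
	// render the pod's /etc/resolv.conf from DNSConfig alone.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-custom"},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				Args:  []string{"pause"},
			}},
		},
	}

	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------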
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":241,"skipped":3972,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:45:53.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1118 07:45:55.437038 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 18 07:46:58.226: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:46:58.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4107" for this suite. 
• [SLOW TEST:64.296 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":242,"skipped":3980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:46:58.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:46:58.305: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Nov 18 07:46:58.368: INFO: Pod name sample-pod: Found 0 pods out of 1 Nov 18 07:47:03.385: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Nov 18 07:47:03.386: INFO: Creating deployment "test-rolling-update-deployment" Nov 18 07:47:03.393: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Nov 18 07:47:03.439: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Nov 18 07:47:05.454: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Nov 18 07:47:05.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282423, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282423, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282423, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282423, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Nov 18 07:47:07.466: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 18 07:47:07.483: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-327 /apis/apps/v1/namespaces/deployment-327/deployments/test-rolling-update-deployment 1cd469fd-63db-4e6a-89b9-11dc20ad0403 12008868 1 2020-11-18 07:47:03 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-11-18 07:47:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 07:47:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4006a98c18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-11-18 07:47:03 +0000 
UTC,LastTransitionTime:2020-11-18 07:47:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-11-18 07:47:06 +0000 UTC,LastTransitionTime:2020-11-18 07:47:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Nov 18 07:47:07.490: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-327 /apis/apps/v1/namespaces/deployment-327/replicasets/test-rolling-update-deployment-c4cb8d6d9 52a0e35d-8058-4b61-8f6e-b653de985716 12008857 1 2020-11-18 07:47:03 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 1cd469fd-63db-4e6a-89b9-11dc20ad0403 0x4006a99150 0x4006a99151}] [] [{kube-controller-manager Update apps/v1 2020-11-18 07:47:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cd469fd-63db-4e6a-89b9-11dc20ad0403\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4006a991c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Nov 18 07:47:07.490: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Nov 18 07:47:07.491: INFO: 
&ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-327 /apis/apps/v1/namespaces/deployment-327/replicasets/test-rolling-update-controller de0ac352-4c8b-4a7e-91dc-c15523930a6f 12008867 2 2020-11-18 07:46:58 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 1cd469fd-63db-4e6a-89b9-11dc20ad0403 0x4006a9903f 0x4006a99050}] [] [{e2e.test Update apps/v1 2020-11-18 07:46:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 07:47:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cd469fd-63db-4e6a-89b9-11dc20ad0403\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4006a990e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 07:47:07.497: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-25ps5" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-25ps5 test-rolling-update-deployment-c4cb8d6d9- deployment-327 /api/v1/namespaces/deployment-327/pods/test-rolling-update-deployment-c4cb8d6d9-25ps5 b4aa3fe7-a427-49dd-bfe1-53f2005ec97d 12008856 0 2020-11-18 07:47:03 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 52a0e35d-8058-4b61-8f6e-b653de985716 0x4006a996b0 0x4006a996b1}] [] [{kube-controller-manager Update v1 2020-11-18 07:47:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"52a0e35d-8058-4b61-8f6e-b653de985716\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 07:47:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.253\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m2d92,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m2d92,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m2d92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:47:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:47:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:47:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 07:47:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.253,StartTime:2020-11-18 07:47:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-11-18 07:47:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://a24f6f2adb14697424fb7058e8c2abd381795978f2fd0002942a7d6c451b8b7f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.253,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:47:07.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-327" for this suite. 
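------------------------------
For reference: the Deployment dumped above uses the RollingUpdate strategy with maxSurge and maxUnavailable both at 25% (the "%!(MISSING)" noise in the dump is a fmt artifact around those two percent values), and its selector matches the "name: sample-pod" label of the pre-existing "test-rolling-update-controller" ReplicaSet, which is why that ReplicaSet is adopted as the old revision rather than replaced. A minimal client-go sketch of the same spec, not the e2e framework's own code (function name is ours; labels and image are taken from the dump):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createRollingUpdateDeployment mirrors the spec in the dump above: one
// replica, RollingUpdate with 25%/25%, selector matching the adopted RS.
func createRollingUpdateDeployment(ctx context.Context, cs kubernetes.Interface, ns string) error {
	replicas := int32(1)
	maxSurge := intstr.FromString("25%")
	maxUnavailable := intstr.FromString("25%")
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				}}},
			},
		},
	}
	_, err := cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
	return err
}

Because the selector matches pods the old ReplicaSet already owns, the controller adopts it and rolls it down to 0 replicas while the new "c4cb8d6d9" ReplicaSet scales up, which is exactly the sequence the status dumps above record.
------------------------------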
• [SLOW TEST:9.263 seconds] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":243,"skipped":4021,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:47:07.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-9ab5bce2-c7e8-4129-bbd8-68e07927eaab STEP: Creating a pod to test consume secrets Nov 18 07:47:07.737: INFO: Waiting up to 5m0s for pod "pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1" in namespace "secrets-3409" to be "Succeeded or Failed" Nov 18 07:47:07.785: INFO: Pod "pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1": Phase="Pending", Reason="", readiness=false. Elapsed: 47.421931ms Nov 18 07:47:09.791: INFO: Pod "pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053208349s Nov 18 07:47:11.799: INFO: Pod "pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061900172s STEP: Saw pod success Nov 18 07:47:11.800: INFO: Pod "pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1" satisfied condition "Succeeded or Failed" Nov 18 07:47:11.805: INFO: Trying to get logs from node leguer-worker pod pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1 container secret-volume-test: STEP: delete the pod Nov 18 07:47:12.039: INFO: Waiting for pod pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1 to disappear Nov 18 07:47:12.054: INFO: Pod pod-secrets-ed9f4fbd-82aa-4981-a7f6-da94c96c39c1 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:47:12.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3409" for this suite. 
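------------------------------
The secret-volume test that just passed mounts a Secret with an item mapping (a data key remapped to a new file path) and an explicit per-item mode. A minimal sketch of the two objects involved, assuming illustrative names, data, mode, and mounttest arguments (the log does not show the pod spec):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMappedSecretPod creates a Secret, then a pod that mounts it with a
// key-to-path mapping and a per-item file mode (0400 here, illustrative).
func createMappedSecretPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		return err
	}
	mode := int32(0400) // per-item mode; the exact mode the e2e test sets is not in the log
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{
					SecretName: secret.Name,
					// The mapping: key "data-1" appears in the volume as "new-path-data-1".
					Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				// mounttest flags as used broadly by the e2e mount tests; illustrative here.
				Args:         []string{"mounttest", "--file_content=/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

The test then reads the container's logs (the "Trying to get logs" step above) to verify the mapped file's content and mode.
------------------------------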
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":244,"skipped":4025,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:47:12.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7620 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7620 I1118 07:47:12.353458 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7620, replica count: 2 I1118 07:47:15.404688 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 07:47:18.405344 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 07:47:18.405: INFO: Creating new exec pod Nov 18 07:47:23.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7620 execpodkjzkq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Nov 18 07:47:25.158: INFO: stderr: "I1118 07:47:25.047814 3762 log.go:181] (0x400003a420) (0x40001161e0) Create stream\nI1118 07:47:25.054313 3762 log.go:181] (0x400003a420) (0x40001161e0) Stream added, broadcasting: 1\nI1118 07:47:25.070116 3762 log.go:181] (0x400003a420) Reply frame received for 1\nI1118 07:47:25.071042 3762 log.go:181] (0x400003a420) (0x4000a5e0a0) Create stream\nI1118 07:47:25.071144 3762 log.go:181] (0x400003a420) (0x4000a5e0a0) Stream added, broadcasting: 3\nI1118 07:47:25.074013 3762 log.go:181] (0x400003a420) Reply frame received for 3\nI1118 07:47:25.082105 3762 log.go:181] (0x400003a420) (0x40005221e0) Create stream\nI1118 07:47:25.082187 3762 log.go:181] (0x400003a420) (0x40005221e0) Stream added, broadcasting: 5\nI1118 07:47:25.083315 3762 log.go:181] (0x400003a420) Reply frame received for 5\nI1118 07:47:25.137643 3762 log.go:181] (0x400003a420) Data frame received for 3\nI1118 07:47:25.137919 3762 log.go:181] (0x400003a420) Data frame received for 1\nI1118 07:47:25.138033 3762 log.go:181] (0x4000a5e0a0) (3) Data frame handling\nI1118 07:47:25.138244 3762 log.go:181] (0x400003a420) Data frame received for 5\nI1118 07:47:25.138365 
3762 log.go:181] (0x40005221e0) (5) Data frame handling\nI1118 07:47:25.138519 3762 log.go:181] (0x40001161e0) (1) Data frame handling\nI1118 07:47:25.139172 3762 log.go:181] (0x40005221e0) (5) Data frame sent\nI1118 07:47:25.139393 3762 log.go:181] (0x40001161e0) (1) Data frame sent\nI1118 07:47:25.140062 3762 log.go:181] (0x400003a420) Data frame received for 5\nI1118 07:47:25.140130 3762 log.go:181] (0x40005221e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1118 07:47:25.142469 3762 log.go:181] (0x400003a420) (0x40001161e0) Stream removed, broadcasting: 1\nI1118 07:47:25.145834 3762 log.go:181] (0x400003a420) Go away received\nI1118 07:47:25.148715 3762 log.go:181] (0x400003a420) (0x40001161e0) Stream removed, broadcasting: 1\nI1118 07:47:25.149446 3762 log.go:181] (0x400003a420) (0x4000a5e0a0) Stream removed, broadcasting: 3\nI1118 07:47:25.149695 3762 log.go:181] (0x400003a420) (0x40005221e0) Stream removed, broadcasting: 5\n" Nov 18 07:47:25.160: INFO: stdout: "" Nov 18 07:47:25.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7620 execpodkjzkq -- /bin/sh -x -c nc -zv -t -w 2 10.102.81.70 80' Nov 18 07:47:26.747: INFO: stderr: "I1118 07:47:26.609921 3782 log.go:181] (0x40003b8210) (0x40007bc000) Create stream\nI1118 07:47:26.613305 3782 log.go:181] (0x40003b8210) (0x40007bc000) Stream added, broadcasting: 1\nI1118 07:47:26.622697 3782 log.go:181] (0x40003b8210) Reply frame received for 1\nI1118 07:47:26.623276 3782 log.go:181] (0x40003b8210) (0x4000c82000) Create stream\nI1118 07:47:26.623335 3782 log.go:181] (0x40003b8210) (0x4000c82000) Stream added, broadcasting: 3\nI1118 07:47:26.624760 3782 log.go:181] (0x40003b8210) Reply frame received for 3\nI1118 07:47:26.625216 3782 log.go:181] (0x40003b8210) (0x400015e000) Create stream\nI1118 07:47:26.625314 3782 log.go:181] (0x40003b8210) (0x400015e000) Stream added, broadcasting: 5\nI1118 07:47:26.626663 3782 log.go:181] (0x40003b8210) Reply frame received for 5\nI1118 07:47:26.722281 3782 log.go:181] (0x40003b8210) Data frame received for 3\nI1118 07:47:26.723178 3782 log.go:181] (0x40003b8210) Data frame received for 1\nI1118 07:47:26.723358 3782 log.go:181] (0x40007bc000) (1) Data frame handling\nI1118 07:47:26.723873 3782 log.go:181] (0x4000c82000) (3) Data frame handling\nI1118 07:47:26.724271 3782 log.go:181] (0x40003b8210) Data frame received for 5\nI1118 07:47:26.724479 3782 log.go:181] (0x400015e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.81.70 80\nConnection to 10.102.81.70 80 port [tcp/http] succeeded!\nI1118 07:47:26.726891 3782 log.go:181] (0x40007bc000) (1) Data frame sent\nI1118 07:47:26.727195 3782 log.go:181] (0x400015e000) (5) Data frame sent\nI1118 07:47:26.729516 3782 log.go:181] (0x40003b8210) Data frame received for 5\nI1118 07:47:26.730398 3782 log.go:181] (0x40003b8210) (0x40007bc000) Stream removed, broadcasting: 1\nI1118 07:47:26.731590 3782 log.go:181] (0x400015e000) (5) Data frame handling\nI1118 07:47:26.732717 3782 log.go:181] (0x40003b8210) Go away received\nI1118 07:47:26.737326 3782 log.go:181] (0x40003b8210) (0x40007bc000) Stream removed, broadcasting: 1\nI1118 07:47:26.737729 3782 log.go:181] (0x40003b8210) (0x4000c82000) Stream removed, broadcasting: 3\nI1118 07:47:26.738007 3782 log.go:181] (0x40003b8210) (0x400015e000) Stream removed, broadcasting: 5\n" Nov 18 07:47:26.748: INFO: stdout: "" Nov 18 07:47:26.749: 
INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:47:26.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7620" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.764 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":245,"skipped":4045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:47:26.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-42db93f2-e511-4a9d-b4d0-31bf97387b0a STEP: Creating a pod to test consume secrets Nov 18 07:47:26.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3" in namespace "projected-9348" to be "Succeeded or Failed" Nov 18 07:47:26.962: INFO: Pod "pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.449045ms Nov 18 07:47:28.969: INFO: Pod "pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029100003s Nov 18 07:47:30.977: INFO: Pod "pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037139024s STEP: Saw pod success Nov 18 07:47:30.978: INFO: Pod "pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3" satisfied condition "Succeeded or Failed" Nov 18 07:47:30.985: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3 container projected-secret-volume-test: STEP: delete the pod Nov 18 07:47:31.034: INFO: Waiting for pod pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3 to disappear Nov 18 07:47:31.135: INFO: Pod pod-projected-secrets-63214c51-052e-43c0-bd8b-0ad485a191a3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:47:31.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9348" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4068,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:47:31.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 07:47:36.071: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 07:47:38.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282456, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282456, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282456, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282455, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:47:41.132: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:47:41.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-161" for this suite. STEP: Destroying namespace "webhook-161-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.031 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":247,"skipped":4081,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:47:41.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Nov 18 07:49:42.073: INFO: Successfully updated pod "var-expansion-4f64562b-cfb9-48ab-b420-cd9f64c42858" STEP: waiting for pod running STEP: deleting the pod gracefully Nov 18 07:49:44.124: INFO: Deleting pod "var-expansion-4f64562b-cfb9-48ab-b420-cd9f64c42858" in namespace "var-expansion-5522" Nov 18 07:49:44.130: INFO: Wait up to 5m0s for 
pod "var-expansion-4f64562b-cfb9-48ab-b420-cd9f64c42858" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:50:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5522" for this suite. • [SLOW TEST:158.793 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":248,"skipped":4088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:50:20.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 18 07:50:24.495: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:50:24.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3802" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4116,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:50:24.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Nov 18 07:50:24.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5849' Nov 18 07:50:26.884: INFO: stderr: "" Nov 18 07:50:26.884: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 18 07:50:26.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5849' Nov 18 07:50:28.248: INFO: stderr: "" Nov 18 07:50:28.248: INFO: stdout: "update-demo-nautilus-6xksn update-demo-nautilus-tb2wb " Nov 18 07:50:28.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xksn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5849' Nov 18 07:50:29.640: INFO: stderr: "" Nov 18 07:50:29.640: INFO: stdout: "" Nov 18 07:50:29.640: INFO: update-demo-nautilus-6xksn is created but not running Nov 18 07:50:34.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5849' Nov 18 07:50:36.051: INFO: stderr: "" Nov 18 07:50:36.051: INFO: stdout: "update-demo-nautilus-6xksn update-demo-nautilus-tb2wb " Nov 18 07:50:36.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xksn -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5849' Nov 18 07:50:37.563: INFO: stderr: "" Nov 18 07:50:37.564: INFO: stdout: "true" Nov 18 07:50:37.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xksn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5849' Nov 18 07:50:38.970: INFO: stderr: "" Nov 18 07:50:38.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:50:38.971: INFO: validating pod update-demo-nautilus-6xksn Nov 18 07:50:38.977: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:50:38.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 18 07:50:38.977: INFO: update-demo-nautilus-6xksn is verified up and running Nov 18 07:50:38.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tb2wb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5849' Nov 18 07:50:40.386: INFO: stderr: "" Nov 18 07:50:40.386: INFO: stdout: "true" Nov 18 07:50:40.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tb2wb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5849' Nov 18 07:50:41.757: INFO: stderr: "" Nov 18 07:50:41.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 18 07:50:41.757: INFO: validating pod update-demo-nautilus-tb2wb Nov 18 07:50:41.764: INFO: got data: { "image": "nautilus.jpg" } Nov 18 07:50:41.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 18 07:50:41.765: INFO: update-demo-nautilus-tb2wb is verified up and running STEP: using delete to clean up resources Nov 18 07:50:41.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5849' Nov 18 07:50:44.326: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 18 07:50:44.326: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 18 07:50:44.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5849' Nov 18 07:50:45.774: INFO: stderr: "No resources found in kubectl-5849 namespace.\n" Nov 18 07:50:45.774: INFO: stdout: "" Nov 18 07:50:45.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5849 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 18 07:50:47.158: INFO: stderr: "" Nov 18 07:50:47.158: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:50:47.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5849" for this suite. • [SLOW TEST:22.630 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":250,"skipped":4133,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:50:47.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Nov 18 07:50:47.300: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:50:47.376: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9849" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":251,"skipped":4148,"failed":0} ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:50:47.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:52:47.527: INFO: Deleting pod "var-expansion-e76ab7cc-895c-4521-8212-c27b55674afd" in namespace "var-expansion-2184" Nov 18 07:52:47.535: INFO: Wait up to 5m0s for pod "var-expansion-e76ab7cc-895c-4521-8212-c27b55674afd" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:52:51.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2184" for this suite. 
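------------------------------
The log shows only the cleanup of this variable-expansion test, but the test name points at a volumeMount whose subPathExpr expands to an absolute path. Our reading, hedged since the pod spec is not in the log: API validation sees only the literal "$(VAR)" string and admits the pod, and the kubelet then refuses the expanded absolute path at mount time, so the pod never starts and the test waits roughly two minutes before deleting it. An illustrative sketch of such a pod:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createAbsoluteSubPathPod builds a pod whose subPathExpr expands, via a
// container env var, to an absolute path; the kubelet rejects it at mount
// time even though the API server accepted the pod.
func createAbsoluteSubPathPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-abs-subpath"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env:     []corev1.EnvVar{{Name: "ABS_PATH", Value: "/tmp"}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/volume_mount",
					SubPathExpr: "$(ABS_PATH)", // expands to "/tmp", an absolute path
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------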
• [SLOW TEST:124.371 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":252,"skipped":4148,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:52:51.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf Nov 18 07:52:51.942: INFO: Pod name my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf: Found 0 pods out of 1 Nov 18 07:52:56.950: INFO: Pod name my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf: Found 1 pods out of 1 Nov 18 07:52:56.950: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf" are running Nov 18 07:52:56.956: INFO: Pod "my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf-zrk7k" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 07:52:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 07:52:55 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 07:52:55 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 07:52:51 +0000 UTC Reason: Message:}]) Nov 18 07:52:56.959: INFO: Trying to dial the pod Nov 18 07:53:01.983: INFO: Controller my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf: Got expected result from replica 1 [my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf-zrk7k]: "my-hostname-basic-4941d915-39ad-47f4-94ee-741782e604cf-zrk7k", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:53:01.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3240" for this suite. • [SLOW TEST:10.221 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":253,"skipped":4207,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:53:02.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-c67dcb3b-33c6-47e8-bc3b-36cfd115ea87 STEP: Creating a pod to test consume secrets Nov 18 07:53:02.140: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556" in namespace "projected-5001" to be "Succeeded or Failed" Nov 18 07:53:02.218: INFO: Pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556": Phase="Pending", Reason="", readiness=false. Elapsed: 77.978487ms Nov 18 07:53:04.225: INFO: Pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085245694s Nov 18 07:53:06.245: INFO: Pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556": Phase="Running", Reason="", readiness=true. Elapsed: 4.104908917s Nov 18 07:53:08.257: INFO: Pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117180212s STEP: Saw pod success Nov 18 07:53:08.258: INFO: Pod "pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556" satisfied condition "Succeeded or Failed" Nov 18 07:53:08.264: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556 container projected-secret-volume-test: STEP: delete the pod Nov 18 07:53:08.306: INFO: Waiting for pod pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556 to disappear Nov 18 07:53:08.310: INFO: Pod pod-projected-secrets-9f15a5f1-d9e6-4b85-91f9-41e4a6f14556 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:53:08.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5001" for this suite. • [SLOW TEST:6.336 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4220,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:53:08.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-7bd7908b-4a0e-4105-8fe1-68af1033edc3 STEP: Creating secret with name s-test-opt-upd-f51693b3-4b80-4634-80bd-4c17fddf734d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7bd7908b-4a0e-4105-8fe1-68af1033edc3 STEP: Updating secret s-test-opt-upd-f51693b3-4b80-4634-80bd-4c17fddf734d STEP: Creating secret with name s-test-opt-create-e90d25ab-62c5-45ee-a8ca-9d1a78f9f95a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:53:16.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4144" for this suite. 
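The optional-updates sequence above (delete one source secret, update a second, create a third, then wait for the volume to catch up) only works because the volume sources are marked optional, so the pod keeps running while its sources churn. A minimal sketch of that pod shape, assuming hypothetical names and an illustrative image in place of the generated ones in this log, and not the e2e framework's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-volume", Namespace: "secrets-4144"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "creates-volume-test",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative image, not read from this log
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volumes/delete"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-del", // generated suffix elided; this secret is deleted mid-test
						Optional:   &optional,        // pod stays Running even when the source disappears
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}

Secret-volume updates propagate on the kubelet's sync period plus its cache TTL, which is why the test "waits to observe update in volume" rather than asserting immediately after the writes.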
• [SLOW TEST:8.345 seconds] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":255,"skipped":4224,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:53:16.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Nov 18 07:53:19.541: INFO: starting watch STEP: patching STEP: updating Nov 18 07:53:19.581: INFO: waiting for watch events with expected annotations Nov 18 07:53:19.581: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:53:19.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-1426" for this suite. 
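The /approval and /status steps above are subresources of the CSR object: approval is recorded as a condition on status and must be written through the /approval endpoint (UpdateApproval in client-go) rather than a plain update. A sketch of the certificates.k8s.io/v1 objects involved, with a placeholder PEM and a hypothetical signer name (a real request body would be required in practice):

package main

import (
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "example-csr"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    []byte("-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"), // placeholder PEM
			SignerName: "example.com/e2e", // hypothetical signer
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	// Approval lives on status as a condition; the API server only accepts it
	// via the /approval subresource.
	csr.Status.Conditions = append(csr.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
		Type:    certificatesv1.CertificateApproved,
		Status:  corev1.ConditionTrue,
		Reason:  "E2E",
		Message: "approved for illustration",
	})
	fmt.Println(csr.Name, csr.Status.Conditions[0].Type)
}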
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":256,"skipped":4238,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:53:19.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7350 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7350;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7350 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7350;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7350.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7350.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7350.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7350.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7350.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7350.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.149.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.149.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.149.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.149.40_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7350 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7350;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7350 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7350;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7350.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7350.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7350.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7350.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7350.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7350.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7350.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7350.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.149.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.149.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.149.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.149.40_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 07:53:32.038: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.042: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.047: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.065: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.069: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.097: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.102: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.107: INFO: Unable to read jessie_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.111: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.115: INFO: Unable to read jessie_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.119: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.122: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.126: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:32.159: INFO: Lookups using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7350 wheezy_tcp@dns-test-service.dns-7350 wheezy_udp@dns-test-service.dns-7350.svc wheezy_tcp@dns-test-service.dns-7350.svc wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7350 jessie_tcp@dns-test-service.dns-7350 jessie_udp@dns-test-service.dns-7350.svc jessie_tcp@dns-test-service.dns-7350.svc jessie_udp@_http._tcp.dns-test-service.dns-7350.svc jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc] Nov 18 07:53:37.166: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.171: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.194: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.198: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.217: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.222: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.226: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.254: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.258: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.263: INFO: Unable to read jessie_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.270: INFO: Unable to read jessie_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.275: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:37.340: INFO: Lookups using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7350 wheezy_tcp@dns-test-service.dns-7350 wheezy_udp@dns-test-service.dns-7350.svc wheezy_tcp@dns-test-service.dns-7350.svc wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7350 jessie_tcp@dns-test-service.dns-7350 jessie_udp@dns-test-service.dns-7350.svc jessie_tcp@dns-test-service.dns-7350.svc jessie_udp@_http._tcp.dns-test-service.dns-7350.svc jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc] Nov 18 07:53:43.128: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.187: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.193: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350 from pod 
dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.201: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.205: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.208: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.558: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.561: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.564: INFO: Unable to read jessie_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.572: INFO: Unable to read jessie_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.580: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.585: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:43.610: INFO: Lookups using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7350 wheezy_tcp@dns-test-service.dns-7350 wheezy_udp@dns-test-service.dns-7350.svc wheezy_tcp@dns-test-service.dns-7350.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7350 jessie_tcp@dns-test-service.dns-7350 jessie_udp@dns-test-service.dns-7350.svc jessie_tcp@dns-test-service.dns-7350.svc jessie_udp@_http._tcp.dns-test-service.dns-7350.svc jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc] Nov 18 07:53:47.283: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.294: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.297: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.301: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.308: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.312: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.317: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.347: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.351: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.355: INFO: Unable to read jessie_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.359: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.367: INFO: Unable to read jessie_udp@dns-test-service.dns-7350.svc from pod 
dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.373: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:47.433: INFO: Lookups using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7350 wheezy_tcp@dns-test-service.dns-7350 wheezy_udp@dns-test-service.dns-7350.svc wheezy_tcp@dns-test-service.dns-7350.svc wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7350 jessie_tcp@dns-test-service.dns-7350 jessie_udp@dns-test-service.dns-7350.svc jessie_tcp@dns-test-service.dns-7350.svc jessie_udp@_http._tcp.dns-test-service.dns-7350.svc jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc] Nov 18 07:53:52.166: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.189: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod 
dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.211: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.215: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.219: INFO: Unable to read jessie_udp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350 from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.227: INFO: Unable to read jessie_udp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.233: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc from pod dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d: the server could not find the requested resource (get pods dns-test-9212347d-345d-4cdf-8457-13f88cabef1d) Nov 18 07:53:52.250: INFO: Lookups using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7350 wheezy_tcp@dns-test-service.dns-7350 wheezy_udp@dns-test-service.dns-7350.svc wheezy_tcp@dns-test-service.dns-7350.svc wheezy_udp@_http._tcp.dns-test-service.dns-7350.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7350.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7350 jessie_tcp@dns-test-service.dns-7350 jessie_udp@dns-test-service.dns-7350.svc jessie_tcp@dns-test-service.dns-7350.svc jessie_udp@_http._tcp.dns-test-service.dns-7350.svc jessie_tcp@_http._tcp.dns-test-service.dns-7350.svc] Nov 18 07:53:57.309: INFO: DNS probes using dns-7350/dns-test-9212347d-345d-4cdf-8457-13f88cabef1d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:00.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7350" for this suite. 
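The repeated "Unable to read" entries above are not DNS failures as such: each wheezy/jessie probe pod only writes /results/<name> once the corresponding dig succeeds, and the framework re-reads the result files roughly every five seconds (07:53:32, :37, :43, :47, :52) until every name passes at 07:53:57. What lets the partial names resolve at all is the pod's /etc/resolv.conf search path plus ndots:5. A minimal sketch of the same lookups from Go, assuming it runs inside a pod in the dns-7350 namespace:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Partial names taken from the probe commands above; the resolver expands
	// them through the search domains (dns-7350.svc.cluster.local,
	// svc.cluster.local, cluster.local).
	for _, name := range []string{
		"dns-test-service",              // expands to dns-test-service.dns-7350.svc.cluster.local
		"dns-test-service.dns-7350",     // namespace-qualified, still partial
		"dns-test-service.dns-7350.svc", // one label short of the FQDN
	} {
		addrs, err := net.LookupHost(name)
		fmt.Println(name, addrs, err)
	}
}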
• [SLOW TEST:40.405 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":257,"skipped":4245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:00.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 18 07:54:06.211: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:06.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2726" for this suite. 
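The "Expected: &{OK} to match Container's Termination Message: OK" line is the assertion on the retrieved termination message. With FallbackToLogsOnError the kubelet falls back to the tail of the container log only when the container fails and leaves the termination-message file empty; here the pod succeeds, so the message still comes from the file. A sketch of the container fields involved, with an illustrative name, image, and command not taken from this log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox", // illustrative
		Command: []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		// Default path; the kubelet reads this file after the container exits.
		TerminationMessagePath: "/dev/termination-log",
		// Use the log tail only on failure with an empty message file.
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}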
• [SLOW TEST:6.119 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4289,"failed":0} SSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:06.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:54:06.385: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:10.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9630" for this suite. 
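The websocket variant exercised above targets the same logs subresource that ordinary streaming clients use; the test upgrades the connection to a websocket, which the following non-websocket sketch with client-go does not do. The pod name here is hypothetical (the test generates a suffixed one):

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET .../namespaces/pods-9630/pods/<name>/log, the same endpoint the
	// websocket client negotiates against.
	req := clientset.CoreV1().Pods("pods-9630").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	out, _ := io.ReadAll(stream)
	fmt.Print(string(out))
}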
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4292,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:10.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-451d4ea9-fc2a-4687-8097-3f99bea66fa1 STEP: Creating a pod to test consume secrets Nov 18 07:54:10.637: INFO: Waiting up to 5m0s for pod "pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc" in namespace "secrets-9930" to be "Succeeded or Failed" Nov 18 07:54:10.693: INFO: Pod "pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 55.717178ms Nov 18 07:54:12.702: INFO: Pod "pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064159402s Nov 18 07:54:14.708: INFO: Pod "pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070269994s STEP: Saw pod success Nov 18 07:54:14.708: INFO: Pod "pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc" satisfied condition "Succeeded or Failed" Nov 18 07:54:14.712: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc container secret-volume-test: STEP: delete the pod Nov 18 07:54:14.755: INFO: Waiting for pod pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc to disappear Nov 18 07:54:14.888: INFO: Pod pod-secrets-05e29e26-a28d-411c-8e90-943b612eb2fc no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:14.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9930" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:15.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Nov 18 07:54:18.686: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Nov 18 07:54:20.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 07:54:22.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741282858, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 07:54:25.828: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:54:25.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:26.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1726" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:11.997 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":261,"skipped":4329,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:27.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 18 07:54:27.412: INFO: Waiting up to 5m0s for pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896" in namespace "emptydir-515" to be "Succeeded or Failed" Nov 18 07:54:27.466: INFO: Pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896": Phase="Pending", Reason="", readiness=false. Elapsed: 53.43908ms Nov 18 07:54:29.565: INFO: Pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152574805s Nov 18 07:54:31.571: INFO: Pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.158506431s Nov 18 07:54:33.580: INFO: Pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167473456s STEP: Saw pod success Nov 18 07:54:33.580: INFO: Pod "pod-3658495c-04f2-49a9-9110-8b01aca8e896" satisfied condition "Succeeded or Failed" Nov 18 07:54:33.586: INFO: Trying to get logs from node leguer-worker2 pod pod-3658495c-04f2-49a9-9110-8b01aca8e896 container test-container: STEP: delete the pod Nov 18 07:54:33.635: INFO: Waiting for pod pod-3658495c-04f2-49a9-9110-8b01aca8e896 to disappear Nov 18 07:54:33.660: INFO: Pod pod-3658495c-04f2-49a9-9110-8b01aca8e896 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:33.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-515" for this suite. • [SLOW TEST:6.595 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4343,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:33.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:54:33.908: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240" in namespace "downward-api-9350" to be "Succeeded or Failed" Nov 18 07:54:33.926: INFO: Pod "downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240": Phase="Pending", Reason="", readiness=false. Elapsed: 17.238544ms Nov 18 07:54:35.997: INFO: Pod "downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088996122s Nov 18 07:54:38.020: INFO: Pod "downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.111770628s STEP: Saw pod success Nov 18 07:54:38.020: INFO: Pod "downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240" satisfied condition "Succeeded or Failed" Nov 18 07:54:38.026: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240 container client-container: STEP: delete the pod Nov 18 07:54:38.240: INFO: Waiting for pod downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240 to disappear Nov 18 07:54:38.247: INFO: Pod downwardapi-volume-a8e82348-1342-40e5-98df-3994212e1240 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:38.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9350" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4344,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:38.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Nov 18 07:54:44.981: INFO: Successfully updated pod "adopt-release-fnf4r" STEP: Checking that the Job readopts the Pod Nov 18 07:54:44.981: INFO: Waiting up to 15m0s for pod "adopt-release-fnf4r" in namespace "job-9561" to be "adopted" Nov 18 07:54:45.033: INFO: Pod "adopt-release-fnf4r": Phase="Running", Reason="", readiness=true. Elapsed: 51.98466ms Nov 18 07:54:47.041: INFO: Pod "adopt-release-fnf4r": Phase="Running", Reason="", readiness=true. Elapsed: 2.060222475s Nov 18 07:54:47.042: INFO: Pod "adopt-release-fnf4r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Nov 18 07:54:47.560: INFO: Successfully updated pod "adopt-release-fnf4r" STEP: Checking that the Job releases the Pod Nov 18 07:54:47.561: INFO: Waiting up to 15m0s for pod "adopt-release-fnf4r" in namespace "job-9561" to be "released" Nov 18 07:54:47.593: INFO: Pod "adopt-release-fnf4r": Phase="Running", Reason="", readiness=true. Elapsed: 31.710013ms Nov 18 07:54:49.655: INFO: Pod "adopt-release-fnf4r": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.093861868s Nov 18 07:54:49.655: INFO: Pod "adopt-release-fnf4r" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:49.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9561" for this suite. • [SLOW TEST:11.411 seconds] [sig-apps] Job /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":264,"skipped":4350,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:49.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 07:54:49.920: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3" in namespace "projected-4158" to be "Succeeded or Failed" Nov 18 07:54:49.938: INFO: Pod "downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.83703ms Nov 18 07:54:51.947: INFO: Pod "downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027219926s Nov 18 07:54:53.954: INFO: Pod "downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033960976s STEP: Saw pod success Nov 18 07:54:53.954: INFO: Pod "downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3" satisfied condition "Succeeded or Failed" Nov 18 07:54:53.959: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3 container client-container: STEP: delete the pod Nov 18 07:54:54.006: INFO: Waiting for pod downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3 to disappear Nov 18 07:54:54.014: INFO: Pod downwardapi-volume-fac3e921-73f0-4c0a-a003-1a28be2fa1a3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:54:54.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4158" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:54:54.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5576.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5576.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 18 07:55:00.200: INFO: DNS probes using dns-5576/dns-test-94c78f55-3fa5-41b4-8dad-088761096633 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:55:00.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5576" for this suite. • [SLOW TEST:6.315 seconds] [sig-network] DNS /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":266,"skipped":4398,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:55:00.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-6ee45680-5f72-46e8-b083-a7e7604cadaf in namespace container-probe-7187 Nov 18 07:55:04.797: INFO: Started pod liveness-6ee45680-5f72-46e8-b083-a7e7604cadaf in namespace container-probe-7187 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 07:55:04.801: INFO: Initial restart count of pod liveness-6ee45680-5f72-46e8-b083-a7e7604cadaf is 0 Nov 18 07:55:32.373: INFO: Restart count of pod container-probe-7187/liveness-6ee45680-5f72-46e8-b083-a7e7604cadaf is now 1 (27.572627988s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:55:32.417: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7187" for this suite. • [SLOW TEST:32.097 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4409,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:55:32.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:55:32.841: INFO: Create a RollingUpdate DaemonSet Nov 18 07:55:32.849: INFO: Check that daemon pods launch on every node of the cluster Nov 18 07:55:33.001: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:33.145: INFO: Number of nodes with available pods: 0 Nov 18 07:55:33.145: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:55:34.154: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:34.159: INFO: Number of nodes with available pods: 0 Nov 18 07:55:34.159: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:55:35.294: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:35.300: INFO: Number of nodes with available pods: 0 Nov 18 07:55:35.300: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:55:36.155: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:36.161: INFO: Number of nodes with available pods: 0 Nov 18 07:55:36.161: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:55:37.157: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Nov 18 07:55:37.165: INFO: Number of nodes with available pods: 1 Nov 18 07:55:37.165: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:55:38.158: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:38.164: INFO: Number of nodes with available pods: 2 Nov 18 07:55:38.165: INFO: Number of running nodes: 2, number of available pods: 2 Nov 18 07:55:38.165: INFO: Update the DaemonSet to trigger a rollout Nov 18 07:55:38.177: INFO: Updating DaemonSet daemon-set Nov 18 07:55:51.205: INFO: Roll back the DaemonSet before rollout is complete Nov 18 07:55:51.216: INFO: Updating DaemonSet daemon-set Nov 18 07:55:51.216: INFO: Make sure DaemonSet rollback is complete Nov 18 07:55:51.226: INFO: Wrong image for pod: daemon-set-rqh8x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Nov 18 07:55:51.226: INFO: Pod daemon-set-rqh8x is not available Nov 18 07:55:51.249: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:52.259: INFO: Wrong image for pod: daemon-set-rqh8x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Nov 18 07:55:52.259: INFO: Pod daemon-set-rqh8x is not available Nov 18 07:55:52.270: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:53.260: INFO: Wrong image for pod: daemon-set-rqh8x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Nov 18 07:55:53.260: INFO: Pod daemon-set-rqh8x is not available Nov 18 07:55:53.270: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:55:54.260: INFO: Pod daemon-set-mzthk is not available Nov 18 07:55:54.271: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7099, will wait for the garbage collector to delete the pods Nov 18 07:55:54.347: INFO: Deleting DaemonSet.extensions daemon-set took: 9.090484ms Nov 18 07:55:54.848: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.576128ms Nov 18 07:55:58.360: INFO: Number of nodes with available pods: 0 Nov 18 07:55:58.361: INFO: Number of running nodes: 0, number of available pods: 0 Nov 18 07:55:58.365: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7099/daemonsets","resourceVersion":"12011409"},"items":null} Nov 18 07:55:58.369: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7099/pods","resourceVersion":"12011409"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:55:58.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7099" for this suite. 
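The rollback exercised above — updating the DaemonSet to the unpullable image foo:non-existent, then reverting to docker.io/library/httpd:2.4.38-alpine before the rollout completes — reduces to two read-modify-write updates of the pod template. A minimal client-go sketch of those two writes, assuming the DaemonSet name and namespace from the log and the suite's kubeconfig path; the e2e framework drives this through its own helpers, so this is an illustration rather than the test's actual code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Namespace and name taken from the log above.
	dsClient := cs.AppsV1().DaemonSets("daemonsets-7099")

	setImage := func(image string) error {
		// RetryOnConflict re-reads the object on resourceVersion
		// conflicts, the idiomatic pattern for read-modify-write updates.
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
			if err != nil {
				return err
			}
			ds.Spec.Template.Spec.Containers[0].Image = image
			_, err = dsClient.Update(ctx, ds, metav1.UpdateOptions{})
			return err
		})
	}

	// Break the rollout with an image that can never be pulled...
	if err := setImage("foo:non-existent"); err != nil {
		panic(err)
	}
	// ...then roll back before it completes; pods that already run the
	// old image are left alone, which is the "without unnecessary
	// restarts" property the test checks.
	if err := setImage("docker.io/library/httpd:2.4.38-alpine"); err != nil {
		panic(err)
	}
	fmt.Println("rollback requested")
}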
• [SLOW TEST:25.957 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":268,"skipped":4422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:55:58.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Nov 18 07:56:03.114: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8117 pod-service-account-36bb8188-bdd4-44df-bb86-821cacf51751 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Nov 18 07:56:08.874: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8117 pod-service-account-36bb8188-bdd4-44df-bb86-821cacf51751 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Nov 18 07:56:10.450: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8117 pod-service-account-36bb8188-bdd4-44df-bb86-821cacf51751 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:56:12.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8117" for this suite. 
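The three kubectl exec calls in the ServiceAccounts test above assert that the kubelet projects the service account credential files into every pod. A tiny Go sketch of the same check, runnable only from inside a pod; the directory is the standard mount point and the file names match the ones cat'd above:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Standard path the kubelet uses for service account credentials.
	const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(dir + "/" + name)
		if err != nil {
			fmt.Fprintf(os.Stderr, "missing %s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}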
• [SLOW TEST:13.743 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":269,"skipped":4448,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:56:12.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 07:56:12.326: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Nov 18 07:56:12.343: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:12.383: INFO: Number of nodes with available pods: 0 Nov 18 07:56:12.383: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:56:13.397: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:13.404: INFO: Number of nodes with available pods: 0 Nov 18 07:56:13.404: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:56:14.533: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:14.540: INFO: Number of nodes with available pods: 0 Nov 18 07:56:14.540: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:56:15.397: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:15.404: INFO: Number of nodes with available pods: 0 Nov 18 07:56:15.404: INFO: Node leguer-worker is running more than one daemon pod Nov 18 07:56:16.415: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:16.422: INFO: Number of nodes with available pods: 2 Nov 18 07:56:16.422: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Nov 18 07:56:16.498: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:16.498: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:16.545: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:17.553: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:17.553: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:17.560: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:18.553: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:18.553: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:18.563: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:19.554: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Nov 18 07:56:19.554: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:19.564: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:20.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:20.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:20.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:20.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:21.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:21.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:21.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:21.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:22.557: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:22.557: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:22.557: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:22.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:23.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:23.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:23.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:23.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:24.553: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:24.553: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:24.553: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:24.559: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:25.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Nov 18 07:56:25.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:25.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:25.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:26.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:26.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:26.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:26.566: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:27.556: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:27.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:27.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:27.566: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:28.555: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:28.556: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:28.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:28.564: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:29.554: INFO: Wrong image for pod: daemon-set-8mmgx. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:29.555: INFO: Pod daemon-set-8mmgx is not available Nov 18 07:56:29.555: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:29.565: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:30.556: INFO: Pod daemon-set-4d7sw is not available Nov 18 07:56:30.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:30.566: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:31.573: INFO: Pod daemon-set-4d7sw is not available Nov 18 07:56:31.573: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Nov 18 07:56:31.586: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:32.554: INFO: Pod daemon-set-4d7sw is not available Nov 18 07:56:32.554: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:32.563: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:33.555: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:33.596: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:34.556: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:34.556: INFO: Pod daemon-set-wz2kg is not available Nov 18 07:56:34.566: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:35.576: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:35.577: INFO: Pod daemon-set-wz2kg is not available Nov 18 07:56:35.587: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:36.554: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:36.554: INFO: Pod daemon-set-wz2kg is not available Nov 18 07:56:36.565: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:37.553: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:37.553: INFO: Pod daemon-set-wz2kg is not available Nov 18 07:56:37.562: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:38.557: INFO: Wrong image for pod: daemon-set-wz2kg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Nov 18 07:56:38.557: INFO: Pod daemon-set-wz2kg is not available Nov 18 07:56:38.567: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:39.592: INFO: Pod daemon-set-lv9jc is not available Nov 18 07:56:39.608: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Nov 18 07:56:39.683: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:39.723: INFO: Number of nodes with available pods: 1 Nov 18 07:56:39.723: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 07:56:40.734: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:40.742: INFO: Number of nodes with available pods: 1 Nov 18 07:56:40.743: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 07:56:41.745: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:41.963: INFO: Number of nodes with available pods: 1 Nov 18 07:56:41.963: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 07:56:42.734: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:42.741: INFO: Number of nodes with available pods: 1 Nov 18 07:56:42.741: INFO: Node leguer-worker2 is running more than one daemon pod Nov 18 07:56:43.732: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 18 07:56:43.737: INFO: Number of nodes with available pods: 2 Nov 18 07:56:43.737: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3034, will wait for the garbage collector to delete the pods Nov 18 07:56:43.885: INFO: Deleting DaemonSet.extensions daemon-set took: 9.253078ms Nov 18 07:56:44.286: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.680709ms Nov 18 07:56:49.593: INFO: Number of nodes with available pods: 0 Nov 18 07:56:49.593: INFO: Number of running nodes: 0, number of available pods: 0 Nov 18 07:56:49.598: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3034/daemonsets","resourceVersion":"12011692"},"items":null} Nov 18 07:56:49.603: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3034/pods","resourceVersion":"12011692"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:56:49.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3034" for this suite. 
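For reference, the object driving the RollingUpdate test above looks roughly like this in Go. The label key, container name, and image are illustrative stand-ins consistent with the log, and only construction is shown, not the API call; changing Image to k8s.gcr.io/e2e-test-images/agnhost:2.20 and updating is what triggers the rollout the poll loop watches:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate: when the template changes, old pods are
			// deleted and replaced node by node -- the churn the
			// "Wrong image for pod" poll loop above is watching.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Println(ds.Spec.UpdateStrategy.Type)
}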
• [SLOW TEST:37.507 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":270,"skipped":4474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:56:49.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-19927a83-0b6d-4224-b9a3-6f39c307e6fa STEP: Creating a pod to test consume secrets Nov 18 07:56:49.837: INFO: Waiting up to 5m0s for pod "pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15" in namespace "secrets-6990" to be "Succeeded or Failed" Nov 18 07:56:49.862: INFO: Pod "pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15": Phase="Pending", Reason="", readiness=false. Elapsed: 23.881306ms Nov 18 07:56:51.868: INFO: Pod "pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030186304s Nov 18 07:56:53.892: INFO: Pod "pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054614871s STEP: Saw pod success Nov 18 07:56:53.893: INFO: Pod "pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15" satisfied condition "Succeeded or Failed" Nov 18 07:56:53.900: INFO: Trying to get logs from node leguer-worker pod pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15 container secret-volume-test: STEP: delete the pod Nov 18 07:56:53.953: INFO: Waiting for pod pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15 to disappear Nov 18 07:56:53.991: INFO: Pod pod-secrets-5bac0afc-69fd-42f5-93e5-7836eb3bdb15 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:56:53.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6990" for this suite. 
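The defaultMode test above boils down to a secret volume whose DefaultMode is an octal file mode applied to every projected key. A sketch of such a pod spec, with hypothetical names modeled on the log's secret-test-*/pod-secrets-* pattern; 0400 is an assumed example mode, not necessarily the one the suite uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // octal: owner read-only on each projected file
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Printf("defaultMode=%o\n", *pod.Spec.Volumes[0].VolumeSource.Secret.DefaultMode)
}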
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4516,"failed":0} SSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:56:54.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Nov 18 07:56:54.175: INFO: Major version: 1 STEP: Confirm minor version Nov 18 07:56:54.176: INFO: cleanMinorVersion: 19 Nov 18 07:56:54.177: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:56:54.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-6912" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":272,"skipped":4520,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:56:54.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:56:58.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8449" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4526,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:56:58.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1552/configmap-test-4ada5f18-df26-4996-b997-38465e21b677 STEP: Creating a pod to test consume configMaps Nov 18 07:56:58.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff" in namespace "configmap-1552" to be "Succeeded or Failed" Nov 18 07:56:58.608: INFO: Pod "pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff": Phase="Pending", Reason="", readiness=false. Elapsed: 42.082629ms Nov 18 07:57:00.722: INFO: Pod "pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156373533s Nov 18 07:57:02.729: INFO: Pod "pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163210426s STEP: Saw pod success Nov 18 07:57:02.729: INFO: Pod "pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff" satisfied condition "Succeeded or Failed" Nov 18 07:57:02.756: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff container env-test: STEP: delete the pod Nov 18 07:57:02.778: INFO: Waiting for pod pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff to disappear Nov 18 07:57:02.782: INFO: Pod pod-configmaps-75258fb3-8eb1-483d-a631-6d9975f321ff no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:57:02.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1552" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4528,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:57:02.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 07:57:09.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7895" for this suite. • [SLOW TEST:7.145 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":303,"completed":275,"skipped":4544,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 07:57:09.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 18 07:57:10.037: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 18 07:57:10.054: INFO: Waiting for terminating namespaces to be deleted... 
Nov 18 07:57:10.058: INFO: Logging pods the apiserver thinks are on node leguer-worker before test Nov 18 07:57:10.066: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 07:57:10.066: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 07:57:10.066: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 07:57:10.066: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 07:57:10.066: INFO: Logging pods the apiserver thinks are on node leguer-worker2 before test Nov 18 07:57:10.075: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container status recorded) Nov 18 07:57:10.075: INFO: Container kindnet-cni ready: true, restart count 1 Nov 18 07:57:10.075: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container status recorded) Nov 18 07:57:10.075: INFO: Container kube-proxy ready: true, restart count 0 Nov 18 07:57:10.075: INFO: busybox-host-aliasese8e3a633-0660-49a4-9511-a2ca52c25a1f from kubelet-test-8449 started at 2020-11-18 07:56:54 +0000 UTC (1 container status recorded) Nov 18 07:57:10.075: INFO: Container busybox-host-aliasese8e3a633-0660-49a4-9511-a2ca52c25a1f ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-adde0007-d1c9-4477-bc3b-9e698e30497e 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-adde0007-d1c9-4477-bc3b-9e698e30497e off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-adde0007-d1c9-4477-bc3b-9e698e30497e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:02:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-445" for this suite.
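The scheduling conflict above hinges on how (hostIP, hostPort, protocol) tuples are compared: 0.0.0.0 covers every node address, so it collides with any concrete hostIP on the same port and protocol. A sketch of the two port claims, with illustrative container names and image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPortSpec returns a container that claims hostIP:54322/TCP on its node.
func hostPortSpec(name, hostIP string) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
		Ports: []corev1.ContainerPort{{
			ContainerPort: 8080,
			HostPort:      54322,
			Protocol:      corev1.ProtocolTCP,
			HostIP:        hostIP,
		}},
	}
}

func main() {
	pod4 := hostPortSpec("pod4", "0.0.0.0") // binds the port on all node addresses
	pod5 := hostPortSpec("pod5", "127.0.0.1")
	// 0.0.0.0 subsumes 127.0.0.1, so the scheduler treats these as a
	// (hostPort, protocol) conflict and leaves pod5 Pending on that node.
	fmt.Println(pod4.Ports[0].HostPort == pod5.Ports[0].HostPort)
}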
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":276,"skipped":4566,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:02:20.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:03:20.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7138" for this suite. 
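The container-probe test above distinguishes readiness from liveness: a failing readiness probe keeps the pod out of Ready (and out of service endpoints) but never restarts it, whereas the /healthz liveness test earlier in this run showed the restart count climbing. A sketch of such a pod spec, assuming the v1.19-era API where the probe's action sits in the embedded Handler field (renamed ProbeHandler in later releases); names and timings are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-always-fails"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
				// /bin/false always exits non-zero, so the probe always
				// fails: the pod stays Running, never becomes Ready, and
				// the container is never restarted.
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].ReadinessProbe.PeriodSeconds)
}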
• [SLOW TEST:60.142 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4586,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:03:20.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Nov 18 08:03:21.810: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Nov 18 08:03:23.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741283401, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741283401, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741283401, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741283401, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Nov 18 08:03:26.978: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:03:28.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1310" for this suite. STEP: Destroying namespace "webhook-1310-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":278,"skipped":4593,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:03:28.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 08:03:28.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed" in namespace "projected-7423" to be "Succeeded or Failed" Nov 18 08:03:28.930: INFO: Pod "downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 14.785356ms Nov 18 08:03:31.002: INFO: Pod "downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086942889s Nov 18 08:03:33.010: INFO: Pod "downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095177159s STEP: Saw pod success Nov 18 08:03:33.010: INFO: Pod "downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed" satisfied condition "Succeeded or Failed" Nov 18 08:03:33.015: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed container client-container: STEP: delete the pod Nov 18 08:03:33.068: INFO: Waiting for pod downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed to disappear Nov 18 08:03:33.073: INFO: Pod downwardapi-volume-57671796-033e-4f1d-832f-35115294f1ed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:03:33.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7423" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4595,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:03:33.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2334 STEP: creating service affinity-clusterip-transition in namespace services-2334 STEP: creating replication controller affinity-clusterip-transition in namespace services-2334 I1118 08:03:33.202190 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2334, replica count: 3 I1118 08:03:36.253619 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1118 08:03:39.254280 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Nov 18 08:03:39.262: INFO: Creating new exec pod Nov 18 08:03:44.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2334 execpod-affinity2dd7m -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Nov 18 08:03:46.014: INFO: stderr: "I1118 08:03:45.871701 4085 log.go:181] (0x4000c9a000) (0x4000732140) Create stream\nI1118 08:03:45.876419 4085 log.go:181] (0x4000c9a000) (0x4000732140) Stream added, broadcasting: 1\nI1118 
08:03:45.894367 4085 log.go:181] (0x4000c9a000) Reply frame received for 1\nI1118 08:03:45.895774 4085 log.go:181] (0x4000c9a000) (0x40008bd180) Create stream\nI1118 08:03:45.895845 4085 log.go:181] (0x4000c9a000) (0x40008bd180) Stream added, broadcasting: 3\nI1118 08:03:45.897251 4085 log.go:181] (0x4000c9a000) Reply frame received for 3\nI1118 08:03:45.897478 4085 log.go:181] (0x4000c9a000) (0x40008bd400) Create stream\nI1118 08:03:45.897539 4085 log.go:181] (0x4000c9a000) (0x40008bd400) Stream added, broadcasting: 5\nI1118 08:03:45.898760 4085 log.go:181] (0x4000c9a000) Reply frame received for 5\nI1118 08:03:45.994494 4085 log.go:181] (0x4000c9a000) Data frame received for 5\nI1118 08:03:45.994686 4085 log.go:181] (0x40008bd400) (5) Data frame handling\nI1118 08:03:45.995079 4085 log.go:181] (0x40008bd400) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI1118 08:03:45.996284 4085 log.go:181] (0x4000c9a000) Data frame received for 5\nI1118 08:03:45.996506 4085 log.go:181] (0x40008bd400) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI1118 08:03:45.996648 4085 log.go:181] (0x4000c9a000) Data frame received for 3\nI1118 08:03:45.996795 4085 log.go:181] (0x40008bd180) (3) Data frame handling\nI1118 08:03:45.997129 4085 log.go:181] (0x40008bd400) (5) Data frame sent\nI1118 08:03:45.997253 4085 log.go:181] (0x4000c9a000) Data frame received for 5\nI1118 08:03:45.997359 4085 log.go:181] (0x40008bd400) (5) Data frame handling\nI1118 08:03:45.998156 4085 log.go:181] (0x4000c9a000) Data frame received for 1\nI1118 08:03:45.998243 4085 log.go:181] (0x4000732140) (1) Data frame handling\nI1118 08:03:45.998332 4085 log.go:181] (0x4000732140) (1) Data frame sent\nI1118 08:03:45.999581 4085 log.go:181] (0x4000c9a000) (0x4000732140) Stream removed, broadcasting: 1\nI1118 08:03:46.001855 4085 log.go:181] (0x4000c9a000) Go away received\nI1118 08:03:46.004457 4085 log.go:181] (0x4000c9a000) (0x4000732140) Stream removed, broadcasting: 1\nI1118 08:03:46.004991 4085 log.go:181] (0x4000c9a000) (0x40008bd180) Stream removed, broadcasting: 3\nI1118 08:03:46.005213 4085 log.go:181] (0x4000c9a000) (0x40008bd400) Stream removed, broadcasting: 5\n" Nov 18 08:03:46.015: INFO: stdout: "" Nov 18 08:03:46.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2334 execpod-affinity2dd7m -- /bin/sh -x -c nc -zv -t -w 2 10.108.16.241 80' Nov 18 08:03:47.626: INFO: stderr: "I1118 08:03:47.460674 4105 log.go:181] (0x400020c0b0) (0x40003cc280) Create stream\nI1118 08:03:47.465127 4105 log.go:181] (0x400020c0b0) (0x40003cc280) Stream added, broadcasting: 1\nI1118 08:03:47.479312 4105 log.go:181] (0x400020c0b0) Reply frame received for 1\nI1118 08:03:47.479844 4105 log.go:181] (0x400020c0b0) (0x400063e000) Create stream\nI1118 08:03:47.479920 4105 log.go:181] (0x400020c0b0) (0x400063e000) Stream added, broadcasting: 3\nI1118 08:03:47.481128 4105 log.go:181] (0x400020c0b0) Reply frame received for 3\nI1118 08:03:47.481324 4105 log.go:181] (0x400020c0b0) (0x4000552780) Create stream\nI1118 08:03:47.481373 4105 log.go:181] (0x400020c0b0) (0x4000552780) Stream added, broadcasting: 5\nI1118 08:03:47.482229 4105 log.go:181] (0x400020c0b0) Reply frame received for 5\nI1118 08:03:47.569167 4105 log.go:181] (0x400020c0b0) Data frame received for 3\nI1118 08:03:47.569581 4105 log.go:181] (0x400063e000) (3) Data frame handling\nI1118 08:03:47.569823 4105 log.go:181] 
(0x400020c0b0) Data frame received for 5\nI1118 08:03:47.569962 4105 log.go:181] (0x4000552780) (5) Data frame handling\nI1118 08:03:47.570360 4105 log.go:181] (0x400020c0b0) Data frame received for 1\nI1118 08:03:47.570579 4105 log.go:181] (0x40003cc280) (1) Data frame handling\nI1118 08:03:47.572997 4105 log.go:181] (0x40003cc280) (1) Data frame sent\nI1118 08:03:47.573444 4105 log.go:181] (0x4000552780) (5) Data frame sent\nI1118 08:03:47.573628 4105 log.go:181] (0x400020c0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.108.16.241 80\nConnection to 10.108.16.241 80 port [tcp/http] succeeded!\nI1118 08:03:47.574437 4105 log.go:181] (0x400020c0b0) (0x40003cc280) Stream removed, broadcasting: 1\nI1118 08:03:47.574989 4105 log.go:181] (0x4000552780) (5) Data frame handling\nI1118 08:03:47.615049 4105 log.go:181] (0x400020c0b0) Go away received\nI1118 08:03:47.615968 4105 log.go:181] (0x400020c0b0) (0x40003cc280) Stream removed, broadcasting: 1\nI1118 08:03:47.616446 4105 log.go:181] (0x400020c0b0) (0x400063e000) Stream removed, broadcasting: 3\nI1118 08:03:47.616744 4105 log.go:181] (0x400020c0b0) (0x4000552780) Stream removed, broadcasting: 5\n" Nov 18 08:03:47.627: INFO: stdout: "" Nov 18 08:03:47.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2334 execpod-affinity2dd7m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.16.241:80/ ; done' Nov 18 08:03:49.413: INFO: stderr: "I1118 08:03:49.173159 4126 log.go:181] (0x40001bc8f0) (0x40008aa3c0) Create stream\nI1118 08:03:49.175858 4126 log.go:181] (0x40001bc8f0) (0x40008aa3c0) Stream added, broadcasting: 1\nI1118 08:03:49.185243 4126 log.go:181] (0x40001bc8f0) Reply frame received for 1\nI1118 08:03:49.185909 4126 log.go:181] (0x40001bc8f0) (0x4000abc000) Create stream\nI1118 08:03:49.185977 4126 log.go:181] (0x40001bc8f0) (0x4000abc000) Stream added, broadcasting: 3\nI1118 08:03:49.187639 4126 log.go:181] (0x40001bc8f0) Reply frame received for 3\nI1118 08:03:49.188038 4126 log.go:181] (0x40001bc8f0) (0x4000abc0a0) Create stream\nI1118 08:03:49.188124 4126 log.go:181] (0x40001bc8f0) (0x4000abc0a0) Stream added, broadcasting: 5\nI1118 08:03:49.189576 4126 log.go:181] (0x40001bc8f0) Reply frame received for 5\nI1118 08:03:49.295658 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.296117 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.296262 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.296369 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.297521 4126 log.go:181] (0x4000abc000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.298342 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.300020 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.300130 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.300243 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.300816 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.301010 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.301111 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.301227 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.301331 4126 log.go:181] 
(0x4000abc000) (3) Data frame sent\nI1118 08:03:49.301440 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.306896 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.306968 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.307036 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.307994 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.308082 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -qI1118 08:03:49.308153 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.308272 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.308357 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.308436 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.308504 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.308565 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.308663 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.313134 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.313295 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.313479 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.313595 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.313688 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.313773 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.313854 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.313929 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.314016 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.318180 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.318287 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.318407 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.318673 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.318780 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.318899 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.319049 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.319137 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.319212 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.325440 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.325554 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.325682 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.326537 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.326616 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.326718 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.326837 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.326955 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.327077 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.331420 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.331580 4126 log.go:181] 
(0x4000abc000) (3) Data frame handling\nI1118 08:03:49.331731 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.332238 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.332348 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.332495 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.332631 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.332757 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.332994 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.336004 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.336126 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.336267 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.337029 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.337173 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.337305 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.337413 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.337524 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.337652 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.342322 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.342409 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.342505 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.343122 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.343225 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.343308 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.343390 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.343493 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.343599 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.347714 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.347808 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.347952 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.348444 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.348571 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.348674 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.348794 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.349015 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.349136 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.353006 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.353140 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.353278 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.354118 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.354264 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.354391 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.354509 4126 log.go:181] 
(0x4000abc000) (3) Data frame handling\nI1118 08:03:49.354585 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.354672 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.358394 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.358547 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.358699 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.358900 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.359028 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.359113 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.359218 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.359319 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.359424 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.363333 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.363435 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.363584 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.364121 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.364253 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.364392 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.364507 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.364643 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.364787 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.367327 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.367473 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.367688 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.367855 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.368010 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.368204 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.368362 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.368510 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.368679 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.375372 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.375530 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.375735 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.376229 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.376358 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.376481 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.376636 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.376794 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.377063 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\nI1118 08:03:49.382263 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.382423 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.382612 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.383144 4126 log.go:181] 
(0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.383299 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.383399 4126 log.go:181] (0x4000abc0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:49.383485 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.383564 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.383683 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.389977 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.390139 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.390290 4126 log.go:181] (0x4000abc000) (3) Data frame sent\nI1118 08:03:49.391075 4126 log.go:181] (0x40001bc8f0) Data frame received for 5\nI1118 08:03:49.391241 4126 log.go:181] (0x4000abc0a0) (5) Data frame handling\nI1118 08:03:49.391347 4126 log.go:181] (0x40001bc8f0) Data frame received for 3\nI1118 08:03:49.391466 4126 log.go:181] (0x4000abc000) (3) Data frame handling\nI1118 08:03:49.393152 4126 log.go:181] (0x40001bc8f0) Data frame received for 1\nI1118 08:03:49.393256 4126 log.go:181] (0x40008aa3c0) (1) Data frame handling\nI1118 08:03:49.393405 4126 log.go:181] (0x40008aa3c0) (1) Data frame sent\nI1118 08:03:49.394765 4126 log.go:181] (0x40001bc8f0) (0x40008aa3c0) Stream removed, broadcasting: 1\nI1118 08:03:49.398159 4126 log.go:181] (0x40001bc8f0) Go away received\nI1118 08:03:49.402179 4126 log.go:181] (0x40001bc8f0) (0x40008aa3c0) Stream removed, broadcasting: 1\nI1118 08:03:49.402640 4126 log.go:181] (0x40001bc8f0) (0x4000abc000) Stream removed, broadcasting: 3\nI1118 08:03:49.402959 4126 log.go:181] (0x40001bc8f0) (0x4000abc0a0) Stream removed, broadcasting: 5\n" Nov 18 08:03:49.419: INFO: stdout: "\naffinity-clusterip-transition-t8f8w\naffinity-clusterip-transition-t8f8w\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-t8f8w\naffinity-clusterip-transition-vhlkm\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-vhlkm\naffinity-clusterip-transition-vhlkm\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-vhlkm\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-t8f8w\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-vhlkm" Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-t8f8w Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-t8f8w Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-t8f8w Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-vhlkm Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-vhlkm Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-vhlkm Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-vhlkm Nov 18 08:03:49.419: INFO: Received response from 
host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-t8f8w Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:49.419: INFO: Received response from host: affinity-clusterip-transition-vhlkm Nov 18 08:03:49.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2334 execpod-affinity2dd7m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.108.16.241:80/ ; done' Nov 18 08:03:51.308: INFO: stderr: "I1118 08:03:51.073047 4146 log.go:181] (0x40000c0a50) (0x4000c7e460) Create stream\nI1118 08:03:51.077602 4146 log.go:181] (0x40000c0a50) (0x4000c7e460) Stream added, broadcasting: 1\nI1118 08:03:51.090926 4146 log.go:181] (0x40000c0a50) Reply frame received for 1\nI1118 08:03:51.092348 4146 log.go:181] (0x40000c0a50) (0x40004be0a0) Create stream\nI1118 08:03:51.092477 4146 log.go:181] (0x40000c0a50) (0x40004be0a0) Stream added, broadcasting: 3\nI1118 08:03:51.094375 4146 log.go:181] (0x40000c0a50) Reply frame received for 3\nI1118 08:03:51.094864 4146 log.go:181] (0x40000c0a50) (0x40005080a0) Create stream\nI1118 08:03:51.094979 4146 log.go:181] (0x40000c0a50) (0x40005080a0) Stream added, broadcasting: 5\nI1118 08:03:51.096651 4146 log.go:181] (0x40000c0a50) Reply frame received for 5\nI1118 08:03:51.182939 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.183464 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.183674 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.183934 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.185032 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.186330 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.189199 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.189338 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.189544 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.190241 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.190342 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.190417 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.190484 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.190543 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.190615 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.197569 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.197694 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.197860 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.198495 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.198606 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.198721 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.198885 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.198987 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.199098 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 
08:03:51.201883 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.201985 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.202102 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.202614 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.202734 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.202835 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.202952 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.203053 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.203164 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.210805 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.210898 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.211020 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.211689 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.211804 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.211886 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.211958 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.212024 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.212115 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.219341 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.219478 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.219639 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.219977 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.220119 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.220223 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.220337 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.220432 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.220548 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.226787 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.226926 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.227049 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.227658 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.227805 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.227935 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.228092 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.228227 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.228378 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.231228 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.231333 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.231468 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.232190 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.232435 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 
08:03:51.232542 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.232666 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.232758 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.233056 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.236479 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.236645 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.236918 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.237645 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.237768 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.237855 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.237964 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.238111 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.238232 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.243773 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.243889 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.244030 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.244732 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.244942 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.245071 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.245157 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.245220 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.245293 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.250871 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.251016 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.251162 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.251312 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.251466 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.251615 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.251750 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.251857 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.251988 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.257238 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.257334 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.257458 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.257813 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.257959 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.258080 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.258234 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.258355 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.258448 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.261775 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.261899 4146 log.go:181] (0x40004be0a0) (3) Data frame 
handling\nI1118 08:03:51.262050 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.262380 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.262518 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.262663 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.262814 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.262942 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.263987 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.264090 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.264222 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.264347 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.266847 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.266922 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.266989 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.267405 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.267478 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.267534 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.267615 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.267725 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.267847 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.273086 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.273197 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.273320 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.274093 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.274196 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.274309 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.274438 4146 log.go:181] (0x40005080a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.274528 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.274638 4146 log.go:181] (0x40005080a0) (5) Data frame sent\nI1118 08:03:51.278246 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.278377 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.278526 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.279017 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.279176 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.279324 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.279521 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.279796 4146 log.go:181] (0x40005080a0) (5) Data frame handling\nI1118 08:03:51.279978 4146 log.go:181] (0x40005080a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.108.16.241:80/\nI1118 08:03:51.285005 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.285145 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.285317 4146 log.go:181] (0x40004be0a0) (3) Data frame sent\nI1118 08:03:51.285967 4146 log.go:181] (0x40000c0a50) Data frame received for 5\nI1118 08:03:51.286032 4146 log.go:181] (0x40005080a0) (5) Data 
frame handling\nI1118 08:03:51.286151 4146 log.go:181] (0x40000c0a50) Data frame received for 3\nI1118 08:03:51.286236 4146 log.go:181] (0x40004be0a0) (3) Data frame handling\nI1118 08:03:51.288234 4146 log.go:181] (0x40000c0a50) Data frame received for 1\nI1118 08:03:51.288315 4146 log.go:181] (0x4000c7e460) (1) Data frame handling\nI1118 08:03:51.288388 4146 log.go:181] (0x4000c7e460) (1) Data frame sent\nI1118 08:03:51.289358 4146 log.go:181] (0x40000c0a50) (0x4000c7e460) Stream removed, broadcasting: 1\nI1118 08:03:51.292822 4146 log.go:181] (0x40000c0a50) Go away received\nI1118 08:03:51.295624 4146 log.go:181] (0x40000c0a50) (0x4000c7e460) Stream removed, broadcasting: 1\nI1118 08:03:51.296065 4146 log.go:181] (0x40000c0a50) (0x40004be0a0) Stream removed, broadcasting: 3\nI1118 08:03:51.296664 4146 log.go:181] (0x40000c0a50) (0x40005080a0) Stream removed, broadcasting: 5\n" Nov 18 08:03:51.314: INFO: stdout: "\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v\naffinity-clusterip-transition-hch8v" Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.314: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.315: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.315: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.315: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.315: INFO: Received response from host: affinity-clusterip-transition-hch8v Nov 18 08:03:51.315: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2334, will wait for the garbage collector to delete the pods Nov 18 08:03:51.463: INFO: Deleting ReplicationController affinity-clusterip-transition took: 30.428666ms Nov 18 08:03:51.964: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.729842ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:03:58.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2334" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.761 seconds] [sig-network] Services /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":280,"skipped":4611,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:03:58.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:04:02.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7195" for this suite. 
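------------------------------
The session-affinity test logged above works by flipping spec.sessionAffinity on a live ClusterIP service and re-running the 16-iteration curl loop from the exec pod: with ClientIP affinity every request should land on one backend (the all-hch8v second loop), while without it the responses spread across the three replication-controller pods (the mixed first loop). A hedged sketch of the service and the flip via client-go, assuming v0.19.x signatures; the kubeconfig path and namespace are taken from this run, the 9376 target port and the direction of the flip are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	// ClusterIP service fronting the affinity-clusterip-transition pods.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-transition"},
		Spec: corev1.ServiceSpec{
			Selector:        map[string]string{"name": "affinity-clusterip-transition"},
			Ports:           []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(9376)}},
			SessionAffinity: corev1.ServiceAffinityNone,
		},
	}
	created, err := client.CoreV1().Services("services-2334").Create(ctx, svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// "Switching" affinity is just an update of spec.sessionAffinity; with
	// ClientIP, kube-proxy pins each client IP to a single backend pod.
	created.Spec.SessionAffinity = corev1.ServiceAffinityClientIP
	if _, err := client.CoreV1().Services("services-2334").Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------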
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4622,"failed":0} ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:04:02.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:04:03.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7663" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":282,"skipped":4622,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:04:03.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 08:04:03.328: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:04:03.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4786" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":283,"skipped":4634,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:04:04.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Nov 18 08:04:04.534: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:04:12.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1758" for this suite. 
• [SLOW TEST:8.595 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":284,"skipped":4653,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:04:12.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-4f3f7eb7-f199-4a4f-87b1-a88ea9a793e1 in namespace container-probe-7913 Nov 18 08:04:19.094: INFO: Started pod busybox-4f3f7eb7-f199-4a4f-87b1-a88ea9a793e1 in namespace container-probe-7913 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 08:04:19.105: INFO: Initial restart count of pod busybox-4f3f7eb7-f199-4a4f-87b1-a88ea9a793e1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:08:20.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7913" for this suite. 
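------------------------------
The exec-liveness entry that follows is the inverse of the usual restart test: the container writes /tmp/health once and never removes it, so `cat /tmp/health` succeeds on every probe period and restartCount must stay at its initial 0 for the whole ~4-minute observation window. A sketch of the pod under the same v0.19.x-type assumptions (image and timings illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// /tmp/health is created once and kept for the pod's life.
				Command: []string{"/bin/sh", "-c", "echo ok >/tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					// Even a single allowed failure would restart the
					// container, so a passing run proves zero failures.
					FailureThreshold: 1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------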
• [SLOW TEST:247.629 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4674,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:08:20.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7b7407a7-5bcc-46f4-a6a7-06f01088f2d3 in namespace container-probe-9954 Nov 18 08:08:24.886: INFO: Started pod liveness-7b7407a7-5bcc-46f4-a6a7-06f01088f2d3 in namespace container-probe-9954 STEP: checking the pod's current state and verifying that restartCount is present Nov 18 08:08:24.907: INFO: Initial restart count of pod liveness-7b7407a7-5bcc-46f4-a6a7-06f01088f2d3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:12:25.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9954" for this suite. 
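------------------------------
The tcp:8080 variant that follows swaps the exec action for a TCP connect: the kubelet dials the container's port 8080 each period, and as long as something accepts the connection the container is never restarted. A sketch under the same type assumptions; the agnhost image, tag, and its netexec flag are modeled on what e2e suites of this era commonly used and should be treated as illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20", // illustrative
				Args:  []string{"netexec", "--http-port=8080"},   // listens on 8080
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						// Probe passes if a TCP connection to 8080 succeeds.
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------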
• [SLOW TEST:244.853 seconds] [k8s.io] Probing container /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:12:25.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 08:12:25.759: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Nov 18 08:12:25.770: INFO: Number of nodes with available pods: 0 Nov 18 08:12:25.770: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Nov 18 08:12:25.826: INFO: Number of nodes with available pods: 0 Nov 18 08:12:25.827: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:26.834: INFO: Number of nodes with available pods: 0 Nov 18 08:12:26.834: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:27.850: INFO: Number of nodes with available pods: 0 Nov 18 08:12:27.850: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:28.835: INFO: Number of nodes with available pods: 0 Nov 18 08:12:28.835: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:29.833: INFO: Number of nodes with available pods: 1 Nov 18 08:12:29.833: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Nov 18 08:12:29.929: INFO: Number of nodes with available pods: 1 Nov 18 08:12:29.929: INFO: Number of running nodes: 0, number of available pods: 1 Nov 18 08:12:30.937: INFO: Number of nodes with available pods: 0 Nov 18 08:12:30.938: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Nov 18 08:12:30.967: INFO: Number of nodes with available pods: 0 Nov 18 08:12:30.967: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:31.977: INFO: Number of nodes with available pods: 0 Nov 18 08:12:31.977: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:32.976: INFO: Number of nodes with available pods: 0 Nov 18 08:12:32.976: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:33.976: INFO: Number of nodes with available pods: 0 Nov 18 08:12:33.976: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:34.974: INFO: Number of nodes with available pods: 0 Nov 18 08:12:34.974: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:35.974: INFO: Number of nodes with available pods: 0 Nov 18 08:12:35.974: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:36.975: INFO: Number of nodes with available pods: 0 Nov 18 08:12:36.976: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:37.975: INFO: Number of nodes with available pods: 0 Nov 18 08:12:37.975: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:39.032: INFO: Number of nodes with available pods: 0 Nov 18 08:12:39.032: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:39.976: INFO: Number of nodes with available pods: 0 Nov 18 08:12:39.976: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:40.974: INFO: Number of nodes with available pods: 0 Nov 18 08:12:40.975: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:41.976: INFO: Number of nodes with available pods: 0 Nov 18 08:12:41.976: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:42.974: INFO: Number of nodes with available pods: 0 Nov 18 08:12:42.974: INFO: Node leguer-worker is running more than one daemon pod Nov 18 08:12:43.976: INFO: Number of nodes with available pods: 1 Nov 18 08:12:43.976: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions 
daemon-set in namespace daemonsets-7798, will wait for the garbage collector to delete the pods Nov 18 08:12:44.051: INFO: Deleting DaemonSet.extensions daemon-set took: 9.296442ms Nov 18 08:12:44.452: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.584022ms Nov 18 08:12:50.363: INFO: Number of nodes with available pods: 0 Nov 18 08:12:50.363: INFO: Number of running nodes: 0, number of available pods: 0 Nov 18 08:12:50.369: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7798/daemonsets","resourceVersion":"12015081"},"items":null} Nov 18 08:12:50.375: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7798/pods","resourceVersion":"12015081"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:12:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7798" for this suite. • [SLOW TEST:25.268 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":287,"skipped":4704,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:12:50.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 08:12:50.632: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365" in namespace "projected-2287" to be "Succeeded or Failed" Nov 18 08:12:50.660: INFO: Pod "downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365": Phase="Pending", Reason="", readiness=false. Elapsed: 27.437747ms Nov 18 08:12:52.667: INFO: Pod "downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034574183s Nov 18 08:12:54.675: INFO: Pod "downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043004878s STEP: Saw pod success Nov 18 08:12:54.675: INFO: Pod "downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365" satisfied condition "Succeeded or Failed" Nov 18 08:12:54.681: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365 container client-container: STEP: delete the pod Nov 18 08:12:54.782: INFO: Waiting for pod downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365 to disappear Nov 18 08:12:54.875: INFO: Pod downwardapi-volume-f6c347b8-ae3d-42da-a8cd-1d23f2fc2365 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:12:54.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2287" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4721,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:12:54.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 08:12:54.973: INFO: Creating ReplicaSet my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c Nov 18 08:12:55.007: INFO: Pod name my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c: Found 0 pods out of 1 Nov 18 08:13:00.015: INFO: Pod name my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c: Found 1 pods out of 1 Nov 18 08:13:00.015: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c" is running Nov 18 08:13:00.021: INFO: Pod "my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c-8hk6j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 08:12:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 08:12:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 08:12:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-18 08:12:55 +0000 UTC Reason: Message:}]) Nov 18 08:13:00.024: INFO: Trying to dial the pod Nov 18 08:13:05.042: INFO: Controller my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c: Got expected result from replica 1 
[my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c-8hk6j]: "my-hostname-basic-503cedd0-6669-4aa7-be67-8f43f5d8a44c-8hk6j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:13:05.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9065" for this suite. • [SLOW TEST:10.167 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":289,"skipped":4728,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:13:05.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7250 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-7250 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7250 Nov 18 08:13:05.230: INFO: Found 0 stateful pods, waiting for 1 Nov 18 08:13:15.239: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 18 08:13:15.246: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 08:13:19.942: INFO: stderr: "I1118 08:13:19.772148 4166 log.go:181] (0x400003a160) (0x400017e000) Create stream\nI1118 08:13:19.780732 4166 log.go:181] (0x400003a160) (0x400017e000) Stream added, broadcasting: 1\nI1118 08:13:19.790272 4166 log.go:181] (0x400003a160) Reply frame received for 1\nI1118 
08:13:19.791011 4166 log.go:181] (0x400003a160) (0x400017e0a0) Create stream\nI1118 08:13:19.791084 4166 log.go:181] (0x400003a160) (0x400017e0a0) Stream added, broadcasting: 3\nI1118 08:13:19.792184 4166 log.go:181] (0x400003a160) Reply frame received for 3\nI1118 08:13:19.792382 4166 log.go:181] (0x400003a160) (0x4000a3d4a0) Create stream\nI1118 08:13:19.792432 4166 log.go:181] (0x400003a160) (0x4000a3d4a0) Stream added, broadcasting: 5\nI1118 08:13:19.793612 4166 log.go:181] (0x400003a160) Reply frame received for 5\nI1118 08:13:19.879428 4166 log.go:181] (0x400003a160) Data frame received for 5\nI1118 08:13:19.879706 4166 log.go:181] (0x4000a3d4a0) (5) Data frame handling\nI1118 08:13:19.880327 4166 log.go:181] (0x4000a3d4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 08:13:19.922601 4166 log.go:181] (0x400003a160) Data frame received for 3\nI1118 08:13:19.922870 4166 log.go:181] (0x400017e0a0) (3) Data frame handling\nI1118 08:13:19.923070 4166 log.go:181] (0x400017e0a0) (3) Data frame sent\nI1118 08:13:19.923204 4166 log.go:181] (0x400003a160) Data frame received for 3\nI1118 08:13:19.923332 4166 log.go:181] (0x400017e0a0) (3) Data frame handling\nI1118 08:13:19.923558 4166 log.go:181] (0x400003a160) Data frame received for 5\nI1118 08:13:19.923694 4166 log.go:181] (0x4000a3d4a0) (5) Data frame handling\nI1118 08:13:19.924462 4166 log.go:181] (0x400003a160) Data frame received for 1\nI1118 08:13:19.924550 4166 log.go:181] (0x400017e000) (1) Data frame handling\nI1118 08:13:19.924623 4166 log.go:181] (0x400017e000) (1) Data frame sent\nI1118 08:13:19.926256 4166 log.go:181] (0x400003a160) (0x400017e000) Stream removed, broadcasting: 1\nI1118 08:13:19.928381 4166 log.go:181] (0x400003a160) Go away received\nI1118 08:13:19.931656 4166 log.go:181] (0x400003a160) (0x400017e000) Stream removed, broadcasting: 1\nI1118 08:13:19.932031 4166 log.go:181] (0x400003a160) (0x400017e0a0) Stream removed, broadcasting: 3\nI1118 08:13:19.932311 4166 log.go:181] (0x400003a160) (0x4000a3d4a0) Stream removed, broadcasting: 5\n" Nov 18 08:13:19.943: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 08:13:19.943: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 08:13:19.960: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 18 08:13:29.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 18 08:13:29.968: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 08:13:29.989: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:13:29.991: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:13:29.991: INFO: Nov 18 08:13:29.991: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 18 08:13:31.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990332458s Nov 18 08:13:32.069: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.923014337s Nov 18 08:13:33.130: INFO: Verifying statefulset 
ss doesn't scale past 3 for another 6.912827589s Nov 18 08:13:34.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.852219602s Nov 18 08:13:35.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.84020288s Nov 18 08:13:36.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.805716403s Nov 18 08:13:37.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.780842731s Nov 18 08:13:38.226: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768341834s Nov 18 08:13:39.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 756.226642ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7250 Nov 18 08:13:40.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 08:13:42.035: INFO: stderr: "I1118 08:13:41.878489 4187 log.go:181] (0x4000123810) (0x40007d0500) Create stream\nI1118 08:13:41.886272 4187 log.go:181] (0x4000123810) (0x40007d0500) Stream added, broadcasting: 1\nI1118 08:13:41.909328 4187 log.go:181] (0x4000123810) Reply frame received for 1\nI1118 08:13:41.909980 4187 log.go:181] (0x4000123810) (0x40007d0000) Create stream\nI1118 08:13:41.910061 4187 log.go:181] (0x4000123810) (0x40007d0000) Stream added, broadcasting: 3\nI1118 08:13:41.911633 4187 log.go:181] (0x4000123810) Reply frame received for 3\nI1118 08:13:41.911917 4187 log.go:181] (0x4000123810) (0x40007d00a0) Create stream\nI1118 08:13:41.911982 4187 log.go:181] (0x4000123810) (0x40007d00a0) Stream added, broadcasting: 5\nI1118 08:13:41.913305 4187 log.go:181] (0x4000123810) Reply frame received for 5\nI1118 08:13:42.017135 4187 log.go:181] (0x4000123810) Data frame received for 3\nI1118 08:13:42.017555 4187 log.go:181] (0x4000123810) Data frame received for 5\nI1118 08:13:42.018062 4187 log.go:181] (0x40007d00a0) (5) Data frame handling\nI1118 08:13:42.018597 4187 log.go:181] (0x40007d0000) (3) Data frame handling\nI1118 08:13:42.018955 4187 log.go:181] (0x4000123810) Data frame received for 1\nI1118 08:13:42.019047 4187 log.go:181] (0x40007d0500) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1118 08:13:42.020431 4187 log.go:181] (0x40007d0000) (3) Data frame sent\nI1118 08:13:42.020919 4187 log.go:181] (0x40007d00a0) (5) Data frame sent\nI1118 08:13:42.021346 4187 log.go:181] (0x4000123810) Data frame received for 5\nI1118 08:13:42.021437 4187 log.go:181] (0x40007d00a0) (5) Data frame handling\nI1118 08:13:42.021510 4187 log.go:181] (0x4000123810) Data frame received for 3\nI1118 08:13:42.021614 4187 log.go:181] (0x40007d0000) (3) Data frame handling\nI1118 08:13:42.021724 4187 log.go:181] (0x40007d0500) (1) Data frame sent\nI1118 08:13:42.023138 4187 log.go:181] (0x4000123810) (0x40007d0500) Stream removed, broadcasting: 1\nI1118 08:13:42.023690 4187 log.go:181] (0x4000123810) Go away received\nI1118 08:13:42.026509 4187 log.go:181] (0x4000123810) (0x40007d0500) Stream removed, broadcasting: 1\nI1118 08:13:42.026763 4187 log.go:181] (0x4000123810) (0x40007d0000) Stream removed, broadcasting: 3\nI1118 08:13:42.026910 4187 log.go:181] (0x4000123810) (0x40007d00a0) Stream removed, broadcasting: 5\n" Nov 18 08:13:42.036: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 08:13:42.036: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 08:13:42.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 08:13:43.649: INFO: stderr: "I1118 08:13:43.520015 4208 log.go:181] (0x40008b0000) (0x400016e5a0) Create stream\nI1118 08:13:43.525701 4208 log.go:181] (0x40008b0000) (0x400016e5a0) Stream added, broadcasting: 1\nI1118 08:13:43.539284 4208 log.go:181] (0x40008b0000) Reply frame received for 1\nI1118 08:13:43.540037 4208 log.go:181] (0x40008b0000) (0x4000aa46e0) Create stream\nI1118 08:13:43.540113 4208 log.go:181] (0x40008b0000) (0x4000aa46e0) Stream added, broadcasting: 3\nI1118 08:13:43.541757 4208 log.go:181] (0x40008b0000) Reply frame received for 3\nI1118 08:13:43.542033 4208 log.go:181] (0x40008b0000) (0x4000aa5b80) Create stream\nI1118 08:13:43.542094 4208 log.go:181] (0x40008b0000) (0x4000aa5b80) Stream added, broadcasting: 5\nI1118 08:13:43.543448 4208 log.go:181] (0x40008b0000) Reply frame received for 5\nI1118 08:13:43.628403 4208 log.go:181] (0x40008b0000) Data frame received for 3\nI1118 08:13:43.628717 4208 log.go:181] (0x40008b0000) Data frame received for 5\nI1118 08:13:43.628988 4208 log.go:181] (0x4000aa5b80) (5) Data frame handling\nI1118 08:13:43.629253 4208 log.go:181] (0x4000aa46e0) (3) Data frame handling\nI1118 08:13:43.629529 4208 log.go:181] (0x40008b0000) Data frame received for 1\nI1118 08:13:43.629639 4208 log.go:181] (0x400016e5a0) (1) Data frame handling\nI1118 08:13:43.630380 4208 log.go:181] (0x400016e5a0) (1) Data frame sent\nI1118 08:13:43.630499 4208 log.go:181] (0x4000aa5b80) (5) Data frame sent\nI1118 08:13:43.630775 4208 log.go:181] (0x4000aa46e0) (3) Data frame sent\nI1118 08:13:43.630852 4208 log.go:181] (0x40008b0000) Data frame received for 3\nI1118 08:13:43.630905 4208 log.go:181] (0x4000aa46e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1118 08:13:43.631608 4208 log.go:181] (0x40008b0000) Data frame received for 5\nI1118 08:13:43.631712 4208 log.go:181] (0x4000aa5b80) (5) Data frame handling\nI1118 08:13:43.633265 4208 log.go:181] (0x40008b0000) (0x400016e5a0) Stream removed, broadcasting: 1\nI1118 08:13:43.635932 4208 log.go:181] (0x40008b0000) Go away received\nI1118 08:13:43.639165 4208 log.go:181] (0x40008b0000) (0x400016e5a0) Stream removed, broadcasting: 1\nI1118 08:13:43.639454 4208 log.go:181] (0x40008b0000) (0x4000aa46e0) Stream removed, broadcasting: 3\nI1118 08:13:43.639648 4208 log.go:181] (0x40008b0000) (0x4000aa5b80) Stream removed, broadcasting: 5\n" Nov 18 08:13:43.650: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 08:13:43.650: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 08:13:43.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Nov 18 08:13:45.301: INFO: stderr: "I1118 08:13:45.154024 4229 log.go:181] (0x400002d080) (0x4000b38640) Create stream\nI1118 08:13:45.156349 4229 log.go:181] (0x400002d080) (0x4000b38640) Stream added, broadcasting: 1\nI1118 08:13:45.178312 
4229 log.go:181] (0x400002d080) Reply frame received for 1\nI1118 08:13:45.179162 4229 log.go:181] (0x400002d080) (0x40005c8000) Create stream\nI1118 08:13:45.179254 4229 log.go:181] (0x400002d080) (0x40005c8000) Stream added, broadcasting: 3\nI1118 08:13:45.180940 4229 log.go:181] (0x400002d080) Reply frame received for 3\nI1118 08:13:45.181246 4229 log.go:181] (0x400002d080) (0x40005c80a0) Create stream\nI1118 08:13:45.181322 4229 log.go:181] (0x400002d080) (0x40005c80a0) Stream added, broadcasting: 5\nI1118 08:13:45.182529 4229 log.go:181] (0x400002d080) Reply frame received for 5\nI1118 08:13:45.279233 4229 log.go:181] (0x400002d080) Data frame received for 5\nI1118 08:13:45.279486 4229 log.go:181] (0x40005c80a0) (5) Data frame handling\nI1118 08:13:45.280026 4229 log.go:181] (0x400002d080) Data frame received for 3\nI1118 08:13:45.280176 4229 log.go:181] (0x40005c8000) (3) Data frame handling\nI1118 08:13:45.280347 4229 log.go:181] (0x40005c80a0) (5) Data frame sent\nI1118 08:13:45.280687 4229 log.go:181] (0x400002d080) Data frame received for 1\nI1118 08:13:45.281079 4229 log.go:181] (0x4000b38640) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1118 08:13:45.281243 4229 log.go:181] (0x4000b38640) (1) Data frame sent\nI1118 08:13:45.281468 4229 log.go:181] (0x40005c8000) (3) Data frame sent\nI1118 08:13:45.282080 4229 log.go:181] (0x400002d080) Data frame received for 3\nI1118 08:13:45.282179 4229 log.go:181] (0x40005c8000) (3) Data frame handling\nI1118 08:13:45.282349 4229 log.go:181] (0x400002d080) Data frame received for 5\nI1118 08:13:45.282475 4229 log.go:181] (0x40005c80a0) (5) Data frame handling\nI1118 08:13:45.284826 4229 log.go:181] (0x400002d080) (0x4000b38640) Stream removed, broadcasting: 1\nI1118 08:13:45.287729 4229 log.go:181] (0x400002d080) Go away received\nI1118 08:13:45.290631 4229 log.go:181] (0x400002d080) (0x4000b38640) Stream removed, broadcasting: 1\nI1118 08:13:45.291187 4229 log.go:181] (0x400002d080) (0x40005c8000) Stream removed, broadcasting: 3\nI1118 08:13:45.291429 4229 log.go:181] (0x400002d080) (0x40005c80a0) Stream removed, broadcasting: 5\n" Nov 18 08:13:45.302: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Nov 18 08:13:45.302: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Nov 18 08:13:45.311: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 18 08:13:45.311: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 18 08:13:45.311: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 18 08:13:45.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 08:13:46.931: INFO: stderr: "I1118 08:13:46.774784 4250 log.go:181] (0x400003a0b0) (0x400037c000) Create stream\nI1118 08:13:46.779018 4250 log.go:181] (0x400003a0b0) (0x400037c000) Stream added, broadcasting: 1\nI1118 08:13:46.792905 4250 log.go:181] (0x400003a0b0) Reply frame received for 1\nI1118 08:13:46.793559 4250 log.go:181] (0x400003a0b0) (0x4000630000) Create stream\nI1118 08:13:46.793628 4250 log.go:181] (0x400003a0b0) 
(0x4000630000) Stream added, broadcasting: 3\nI1118 08:13:46.795282 4250 log.go:181] (0x400003a0b0) Reply frame received for 3\nI1118 08:13:46.795582 4250 log.go:181] (0x400003a0b0) (0x40009a2000) Create stream\nI1118 08:13:46.795657 4250 log.go:181] (0x400003a0b0) (0x40009a2000) Stream added, broadcasting: 5\nI1118 08:13:46.797568 4250 log.go:181] (0x400003a0b0) Reply frame received for 5\nI1118 08:13:46.906098 4250 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 08:13:46.906668 4250 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 08:13:46.906821 4250 log.go:181] (0x40009a2000) (5) Data frame handling\nI1118 08:13:46.907045 4250 log.go:181] (0x4000630000) (3) Data frame handling\nI1118 08:13:46.907688 4250 log.go:181] (0x40009a2000) (5) Data frame sent\nI1118 08:13:46.907848 4250 log.go:181] (0x400003a0b0) Data frame received for 1\nI1118 08:13:46.908026 4250 log.go:181] (0x4000630000) (3) Data frame sent\nI1118 08:13:46.908241 4250 log.go:181] (0x400037c000) (1) Data frame handling\nI1118 08:13:46.908366 4250 log.go:181] (0x400037c000) (1) Data frame sent\nI1118 08:13:46.908461 4250 log.go:181] (0x400003a0b0) Data frame received for 5\nI1118 08:13:46.908596 4250 log.go:181] (0x40009a2000) (5) Data frame handling\nI1118 08:13:46.908770 4250 log.go:181] (0x400003a0b0) Data frame received for 3\nI1118 08:13:46.908954 4250 log.go:181] (0x4000630000) (3) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 08:13:46.911027 4250 log.go:181] (0x400003a0b0) (0x400037c000) Stream removed, broadcasting: 1\nI1118 08:13:46.915043 4250 log.go:181] (0x400003a0b0) Go away received\nI1118 08:13:46.917894 4250 log.go:181] (0x400003a0b0) (0x400037c000) Stream removed, broadcasting: 1\nI1118 08:13:46.918443 4250 log.go:181] (0x400003a0b0) (0x4000630000) Stream removed, broadcasting: 3\nI1118 08:13:46.919226 4250 log.go:181] (0x400003a0b0) (0x40009a2000) Stream removed, broadcasting: 5\n" Nov 18 08:13:46.932: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 08:13:46.932: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 08:13:46.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 08:13:48.605: INFO: stderr: "I1118 08:13:48.452723 4270 log.go:181] (0x400003b760) (0x40005f45a0) Create stream\nI1118 08:13:48.456041 4270 log.go:181] (0x400003b760) (0x40005f45a0) Stream added, broadcasting: 1\nI1118 08:13:48.474367 4270 log.go:181] (0x400003b760) Reply frame received for 1\nI1118 08:13:48.474955 4270 log.go:181] (0x400003b760) (0x40007b8aa0) Create stream\nI1118 08:13:48.475020 4270 log.go:181] (0x400003b760) (0x40007b8aa0) Stream added, broadcasting: 3\nI1118 08:13:48.476430 4270 log.go:181] (0x400003b760) Reply frame received for 3\nI1118 08:13:48.476717 4270 log.go:181] (0x400003b760) (0x40005f4000) Create stream\nI1118 08:13:48.476791 4270 log.go:181] (0x400003b760) (0x40005f4000) Stream added, broadcasting: 5\nI1118 08:13:48.477726 4270 log.go:181] (0x400003b760) Reply frame received for 5\nI1118 08:13:48.550175 4270 log.go:181] (0x400003b760) Data frame received for 5\nI1118 08:13:48.550450 4270 log.go:181] (0x40005f4000) (5) Data frame handling\nI1118 08:13:48.551078 4270 log.go:181] (0x40005f4000) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI1118 08:13:48.586411 4270 log.go:181] (0x400003b760) Data frame received for 3\nI1118 08:13:48.586615 4270 log.go:181] (0x400003b760) Data frame received for 5\nI1118 08:13:48.586806 4270 log.go:181] (0x40005f4000) (5) Data frame handling\nI1118 08:13:48.586928 4270 log.go:181] (0x40007b8aa0) (3) Data frame handling\nI1118 08:13:48.587103 4270 log.go:181] (0x40007b8aa0) (3) Data frame sent\nI1118 08:13:48.587222 4270 log.go:181] (0x400003b760) Data frame received for 3\nI1118 08:13:48.587397 4270 log.go:181] (0x40007b8aa0) (3) Data frame handling\nI1118 08:13:48.588060 4270 log.go:181] (0x400003b760) Data frame received for 1\nI1118 08:13:48.588149 4270 log.go:181] (0x40005f45a0) (1) Data frame handling\nI1118 08:13:48.588258 4270 log.go:181] (0x40005f45a0) (1) Data frame sent\nI1118 08:13:48.589607 4270 log.go:181] (0x400003b760) (0x40005f45a0) Stream removed, broadcasting: 1\nI1118 08:13:48.592551 4270 log.go:181] (0x400003b760) Go away received\nI1118 08:13:48.596170 4270 log.go:181] (0x400003b760) (0x40005f45a0) Stream removed, broadcasting: 1\nI1118 08:13:48.596471 4270 log.go:181] (0x400003b760) (0x40007b8aa0) Stream removed, broadcasting: 3\nI1118 08:13:48.596645 4270 log.go:181] (0x400003b760) (0x40005f4000) Stream removed, broadcasting: 5\n" Nov 18 08:13:48.606: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 08:13:48.606: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 08:13:48.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7250 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Nov 18 08:13:50.300: INFO: stderr: "I1118 08:13:50.136262 4291 log.go:181] (0x4000164000) (0x40004cc640) Create stream\nI1118 08:13:50.143659 4291 log.go:181] (0x4000164000) (0x40004cc640) Stream added, broadcasting: 1\nI1118 08:13:50.156530 4291 log.go:181] (0x4000164000) Reply frame received for 1\nI1118 08:13:50.158387 4291 log.go:181] (0x4000164000) (0x400041cc80) Create stream\nI1118 08:13:50.158579 4291 log.go:181] (0x4000164000) (0x400041cc80) Stream added, broadcasting: 3\nI1118 08:13:50.160256 4291 log.go:181] (0x4000164000) Reply frame received for 3\nI1118 08:13:50.160527 4291 log.go:181] (0x4000164000) (0x40006943c0) Create stream\nI1118 08:13:50.160611 4291 log.go:181] (0x4000164000) (0x40006943c0) Stream added, broadcasting: 5\nI1118 08:13:50.162205 4291 log.go:181] (0x4000164000) Reply frame received for 5\nI1118 08:13:50.224735 4291 log.go:181] (0x4000164000) Data frame received for 5\nI1118 08:13:50.225320 4291 log.go:181] (0x40006943c0) (5) Data frame handling\nI1118 08:13:50.226273 4291 log.go:181] (0x40006943c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1118 08:13:50.276486 4291 log.go:181] (0x4000164000) Data frame received for 5\nI1118 08:13:50.276692 4291 log.go:181] (0x40006943c0) (5) Data frame handling\nI1118 08:13:50.277785 4291 log.go:181] (0x4000164000) Data frame received for 3\nI1118 08:13:50.277968 4291 log.go:181] (0x400041cc80) (3) Data frame handling\nI1118 08:13:50.278173 4291 log.go:181] (0x400041cc80) (3) Data frame sent\nI1118 08:13:50.278321 4291 log.go:181] (0x4000164000) Data frame received for 3\nI1118 08:13:50.278470 4291 log.go:181] (0x400041cc80) (3) Data frame handling\nI1118 08:13:50.278960 4291 log.go:181] (0x4000164000) Data 
frame received for 1\nI1118 08:13:50.279114 4291 log.go:181] (0x40004cc640) (1) Data frame handling\nI1118 08:13:50.279271 4291 log.go:181] (0x40004cc640) (1) Data frame sent\nI1118 08:13:50.281064 4291 log.go:181] (0x4000164000) (0x40004cc640) Stream removed, broadcasting: 1\nI1118 08:13:50.284088 4291 log.go:181] (0x4000164000) Go away received\nI1118 08:13:50.289341 4291 log.go:181] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x400041cc80), 0x5:(*spdystream.Stream)(0x40006943c0)}\nI1118 08:13:50.289928 4291 log.go:181] (0x4000164000) (0x40004cc640) Stream removed, broadcasting: 1\nI1118 08:13:50.290606 4291 log.go:181] (0x4000164000) (0x400041cc80) Stream removed, broadcasting: 3\nI1118 08:13:50.291096 4291 log.go:181] (0x4000164000) (0x40006943c0) Stream removed, broadcasting: 5\n" Nov 18 08:13:50.301: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Nov 18 08:13:50.301: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Nov 18 08:13:50.301: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 08:13:50.307: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Nov 18 08:14:00.325: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 18 08:14:00.325: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 18 08:14:00.325: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 18 08:14:00.361: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:00.362: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:00.362: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:00.362: INFO: ss-2 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:00.363: INFO: Nov 18 08:14:00.363: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 18 08:14:01.372: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:01.372: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:01.373: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:01.373: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:01.373: INFO: Nov 18 08:14:01.373: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 18 08:14:02.384: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:02.384: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:02.384: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:02.385: INFO: ss-2 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:02.385: INFO: Nov 18 08:14:02.385: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 18 08:14:03.394: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:03.394: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:03.394: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:03.395: INFO: ss-2 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:29 +0000 UTC }] Nov 18 08:14:03.395: INFO: Nov 18 08:14:03.395: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 18 08:14:04.404: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:04.404: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:04.404: INFO: Nov 18 08:14:04.404: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 18 08:14:05.412: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:05.412: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:05.412: INFO: Nov 18 08:14:05.412: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 18 08:14:06.422: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:06.422: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:06.422: INFO: Nov 18 08:14:06.423: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 18 08:14:07.431: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:07.432: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:07.432: INFO: Nov 18 08:14:07.432: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 18 08:14:08.440: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:08.440: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:08.440: INFO: Nov 18 08:14:08.440: INFO: StatefulSet ss has not reached scale 0, at 1 Nov 18 08:14:09.447: INFO: POD NODE PHASE GRACE CONDITIONS Nov 18 08:14:09.447: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-18 08:13:05 +0000 UTC }] Nov 18 08:14:09.448: INFO: Nov 18 08:14:09.448: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7250 Nov 18 08:14:10.454: INFO: Scaling statefulset ss to 0 Nov 18 08:14:10.470: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Nov 18 08:14:10.475: INFO: Deleting all statefulsets in ns statefulset-7250 Nov 18 08:14:10.480: INFO: Scaling statefulset ss to 0 Nov 18 08:14:10.495: INFO: Waiting for statefulset status.replicas updated to 0 Nov 18 08:14:10.499: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:14:10.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7250" for this suite.
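Burst scaling as exercised above depends on the StatefulSet's parallel pod management, which lets scale-ups and scale-downs proceed without waiting for unready pods. A minimal sketch of a comparable spec (names are assumptions; the suite builds its own StatefulSet and governing headless Service, and makes pods unready by moving index.html out of the webserver's docroot, as the kubectl exec output above shows):

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo                   # hypothetical name
spec:
  serviceName: test               # assumes a matching headless Service exists
  replicas: 1
  podManagementPolicy: Parallel   # burst scaling: do not wait for pods to be Ready
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: httpd:2.4.38-alpine
        readinessProbe:           # fails once index.html is moved aside, as in the log
          httpGet:
            path: /index.html
            port: 80
EOF
$ kubectl scale statefulset ss-demo --replicas=3   # proceeds even with unready pods
$ kubectl scale statefulset ss-demo --replicas=0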
• [SLOW TEST:65.470 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":290,"skipped":4733,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:14:10.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-wqwx STEP: Creating a pod to test atomic-volume-subpath Nov 18 08:14:10.727: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wqwx" in namespace "subpath-595" to be "Succeeded or Failed" Nov 18 08:14:10.774: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Pending", Reason="", readiness=false. Elapsed: 47.586991ms Nov 18 08:14:12.781: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053912838s Nov 18 08:14:14.789: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 4.061807696s Nov 18 08:14:16.795: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 6.06813966s Nov 18 08:14:18.802: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 8.07496395s Nov 18 08:14:20.809: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 10.082094657s Nov 18 08:14:22.817: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 12.090550606s Nov 18 08:14:24.826: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 14.098644614s Nov 18 08:14:26.833: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.106481109s Nov 18 08:14:28.865: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 18.138613734s Nov 18 08:14:30.873: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 20.145923073s Nov 18 08:14:32.881: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Running", Reason="", readiness=true. Elapsed: 22.15385363s Nov 18 08:14:34.919: INFO: Pod "pod-subpath-test-downwardapi-wqwx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.192130568s STEP: Saw pod success Nov 18 08:14:34.919: INFO: Pod "pod-subpath-test-downwardapi-wqwx" satisfied condition "Succeeded or Failed" Nov 18 08:14:34.940: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-downwardapi-wqwx container test-container-subpath-downwardapi-wqwx: STEP: delete the pod Nov 18 08:14:35.011: INFO: Waiting for pod pod-subpath-test-downwardapi-wqwx to disappear Nov 18 08:14:35.067: INFO: Pod pod-subpath-test-downwardapi-wqwx no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wqwx Nov 18 08:14:35.068: INFO: Deleting pod "pod-subpath-test-downwardapi-wqwx" in namespace "subpath-595" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:14:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-595" for this suite. • [SLOW TEST:24.556 seconds] [sig-storage] Subpath /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":291,"skipped":4741,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:14:35.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:14:48.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6169" for this suite. • [SLOW TEST:13.295 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":292,"skipped":4750,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:14:48.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 08:14:48.447: INFO: Creating deployment "test-recreate-deployment" Nov 18 08:14:48.454: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Nov 18 08:14:48.477: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 18 08:14:50.533: INFO: Waiting deployment "test-recreate-deployment" to complete Nov 18 08:14:50.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741284088, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741284088, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741284088, loc:(*time.Location)(0x6e4d0a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741284088, loc:(*time.Location)(0x6e4d0a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 18 08:14:52.546: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 18 08:14:52.558: INFO: Updating deployment test-recreate-deployment Nov 18 08:14:52.558: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Nov 18 08:14:53.127: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5260 /apis/apps/v1/namespaces/deployment-5260/deployments/test-recreate-deployment 6e1abedb-a770-4f5a-acc9-4a602937f17e 12015786 2 2020-11-18 08:14:48 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-11-18 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-11-18 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400060b4c8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-11-18 08:14:52 +0000 UTC,LastTransitionTime:2020-11-18 08:14:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-11-18 08:14:52 +0000 UTC,LastTransitionTime:2020-11-18 08:14:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Nov 18 08:14:53.137: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-5260 /apis/apps/v1/namespaces/deployment-5260/replicasets/test-recreate-deployment-f79dd4667 9b168e9c-cff1-4db2-a593-8d3abefb1c66 12015784 1 2020-11-18 08:14:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 6e1abedb-a770-4f5a-acc9-4a602937f17e 0x400060bda0 0x400060bda1}] [] [{kube-controller-manager Update apps/v1 2020-11-18 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e1abedb-a770-4f5a-acc9-4a602937f17e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400060be38 ClusterFirst 
map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 08:14:53.137: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 18 08:14:53.139: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-5260 /apis/apps/v1/namespaces/deployment-5260/replicasets/test-recreate-deployment-c96cf48f 1110756c-90f1-41fb-87ee-ad7782157db3 12015776 2 2020-11-18 08:14:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 6e1abedb-a770-4f5a-acc9-4a602937f17e 0x400060bc2f 0x400060bc60}] [] [{kube-controller-manager Update apps/v1 2020-11-18 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6e1abedb-a770-4f5a-acc9-4a602937f17e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400060bcd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Nov 18 08:14:53.147: INFO: Pod "test-recreate-deployment-f79dd4667-vxfwm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-vxfwm test-recreate-deployment-f79dd4667- deployment-5260 
/api/v1/namespaces/deployment-5260/pods/test-recreate-deployment-f79dd4667-vxfwm f432f181-bef5-4d3b-9ec3-046544f59fbd 12015787 0 2020-11-18 08:14:52 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 9b168e9c-cff1-4db2-a593-8d3abefb1c66 0x4004578330 0x4004578331}] [] [{kube-controller-manager Update v1 2020-11-18 08:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b168e9c-cff1-4db2-a593-8d3abefb1c66\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-11-18 08:14:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-69t2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-69t2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-69t2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},W
indowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 08:14:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 08:14:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 08:14:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-11-18 08:14:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-11-18 08:14:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:14:53.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5260" for this suite. 
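The Recreate strategy exercised by this test is visible in the dumps above: the old ReplicaSet (test-recreate-deployment-c96cf48f) is scaled to Replicas:*0 before the new one (test-recreate-deployment-f79dd4667) comes up, so old and new pods never run together. A minimal client-go sketch of a Deployment shaped like this one, assuming a configured kubernetes.Interface named c; the name, labels, and image mirror this run, but the code is illustrative, not the e2e framework's own implementation:

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createRecreateDeployment creates a Deployment whose strategy is Recreate:
// the controller scales the old ReplicaSet down to zero before bringing the
// new ReplicaSet up, so old and new pods never overlap.
func createRecreateDeployment(ctx context.Context, c kubernetes.Interface, ns string) error {
	labels := map[string]string{"name": "sample-pod-3"} // mirrors the labels in this run
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	_, err := c.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
	return err
}
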
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":293,"skipped":4769,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:14:53.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 18 08:14:53.328: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8335 /api/v1/namespaces/watch-8335/configmaps/e2e-watch-test-watch-closed eca4087a-0f81-4ce3-a02d-832d8663ba9b 12015795 0 2020-11-18 08:14:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-18 08:14:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 08:14:53.330: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8335 /api/v1/namespaces/watch-8335/configmaps/e2e-watch-test-watch-closed eca4087a-0f81-4ce3-a02d-832d8663ba9b 12015796 0 2020-11-18 08:14:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-18 08:14:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Nov 18 08:14:53.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8335 /api/v1/namespaces/watch-8335/configmaps/e2e-watch-test-watch-closed eca4087a-0f81-4ce3-a02d-832d8663ba9b 12015797 0 2020-11-18 08:14:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-18 08:14:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Nov 18 08:14:53.560: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8335 /api/v1/namespaces/watch-8335/configmaps/e2e-watch-test-watch-closed eca4087a-0f81-4ce3-a02d-832d8663ba9b 12015798 0 2020-11-18 08:14:53 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-11-18 08:14:53 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:14:53.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8335" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":294,"skipped":4772,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:14:53.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Nov 18 08:15:01.983: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 18 08:15:02.030: INFO: Pod pod-with-prestop-exec-hook still exists Nov 18 08:15:04.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 18 08:15:04.039: INFO: Pod pod-with-prestop-exec-hook still exists Nov 18 08:15:06.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 18 08:15:06.039: INFO: Pod pod-with-prestop-exec-hook still exists Nov 18 08:15:08.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 18 08:15:08.039: INFO: Pod pod-with-prestop-exec-hook still exists Nov 18 08:15:10.030: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Nov 18 08:15:10.037: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:15:10.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5727" for this suite. 
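The prestop test above registers an exec hook and then verifies, via the HTTPGet handler pod created in BeforeEach, that the kubelet ran the hook while the pod was being deleted. A minimal sketch of the pod shape involved, using the 1.19-era corev1.Handler type (renamed LifecycleHandler in later API versions); the busybox image, sleep command, and handlerAddr parameter are illustrative assumptions, not values from this run:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPreStopHook returns a pod whose container carries a PreStop exec
// hook. The kubelet executes the hook command inside the container when the
// pod is deleted, before the container receives SIGTERM.
func podWithPreStopHook(handlerAddr string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29", // illustrative image that ships wget
				Command: []string{"sh", "-c", "sleep 600"}, // keep the container alive until deletion
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Notify a hypothetical handler pod that the hook fired.
							Command: []string{"sh", "-c", "wget -qO- http://" + handlerAddr + ":8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}
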
• [SLOW TEST:16.442 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4782,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:15:10.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:15:27.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2601" for this suite. • [SLOW TEST:17.258 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":303,"completed":296,"skipped":4812,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:15:27.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Nov 18 08:17:27.466: INFO: Deleting pod "var-expansion-41b04b96-4597-4f41-ab0c-e6ea35775d8d" in namespace "var-expansion-7996" Nov 18 08:17:27.473: INFO: Wait up to 5m0s for pod "var-expansion-41b04b96-4597-4f41-ab0c-e6ea35775d8d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:17:31.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7996" for this suite. • [SLOW TEST:124.190 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":297,"skipped":4833,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:17:31.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Nov 18 
08:17:31.618: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:17:39.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9000" for this suite. • [SLOW TEST:8.465 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":298,"skipped":4840,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:17:39.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Nov 18 08:17:48.152: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:48.170: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:17:50.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:50.179: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:17:52.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:52.176: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:17:54.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:54.179: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:17:56.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:56.179: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:17:58.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:17:58.181: INFO: Pod pod-with-poststart-exec-hook still exists Nov 18 08:18:00.171: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Nov 18 08:18:00.395: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:18:00.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4547" for this suite. 
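Both lifecycle-hook tests end in the same two-second polling loop ("Waiting for pod ... to disappear"), retrying until the Get call reports NotFound. A minimal sketch of such a loop with apimachinery's wait helpers, assuming a configured kubernetes.Interface named c; the interval and timeout are illustrative, chosen to match the cadence visible above:

package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodToDisappear polls until the named pod is gone, mirroring the
// "Waiting for pod ... to disappear" entries in the log above.
func waitForPodToDisappear(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is deleted: stop polling successfully
		}
		// err == nil means the pod still exists, so keep polling;
		// any other error aborts the wait.
		return false, err
	})
}
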
• [SLOW TEST:20.454 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":299,"skipped":4855,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:18:00.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Nov 18 08:18:00.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2416' Nov 18 08:18:02.031: INFO: stderr: "" Nov 18 08:18:02.031: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Nov 18 08:18:02.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-2416' Nov 18 08:18:03.450: INFO: stderr: "" Nov 18 08:18:03.450: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-11-18T08:18:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n 
},\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-18T08:18:01Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-11-18T08:18:01Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2416\",\n \"resourceVersion\": \"12016549\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2416/pods/e2e-test-httpd-pod\",\n \"uid\": \"20709505-1a8a-475e-ac04-05f7a2108539\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5gcxv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5gcxv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5gcxv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T08:18:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T08:18:01Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T08:18:01Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-18T08:18:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n 
\"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.17\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-11-18T08:18:01Z\"\n }\n}\n" Nov 18 08:18:03.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-2416' Nov 18 08:18:06.579: INFO: stderr: "W1118 08:18:04.501369 4351 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Nov 18 08:18:06.579: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Nov 18 08:18:06.585: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2416' Nov 18 08:18:10.029: INFO: stderr: "" Nov 18 08:18:10.030: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:18:10.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2416" for this suite. • [SLOW TEST:9.603 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":300,"skipped":4858,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:18:10.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 08:18:10.143: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78" in namespace "downward-api-242" to be "Succeeded or 
Failed" Nov 18 08:18:10.154: INFO: Pod "downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.497987ms Nov 18 08:18:12.163: INFO: Pod "downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018969612s Nov 18 08:18:14.168: INFO: Pod "downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024634619s STEP: Saw pod success Nov 18 08:18:14.169: INFO: Pod "downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78" satisfied condition "Succeeded or Failed" Nov 18 08:18:14.172: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78 container client-container: STEP: delete the pod Nov 18 08:18:14.403: INFO: Waiting for pod downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78 to disappear Nov 18 08:18:14.410: INFO: Pod downwardapi-volume-7e21ee23-7bd6-423c-a115-8b21b1877f78 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:18:14.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-242" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4862,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:18:14.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Nov 18 08:18:14.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed" in namespace "downward-api-1650" to be "Succeeded or Failed" Nov 18 08:18:14.555: INFO: Pod "downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.92914ms Nov 18 08:18:16.620: INFO: Pod "downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075097874s Nov 18 08:18:18.628: INFO: Pod "downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.083268394s STEP: Saw pod success Nov 18 08:18:18.628: INFO: Pod "downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed" satisfied condition "Succeeded or Failed" Nov 18 08:18:18.634: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed container client-container: STEP: delete the pod Nov 18 08:18:18.698: INFO: Waiting for pod downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed to disappear Nov 18 08:18:18.710: INFO: Pod downwardapi-volume-83e06879-c5a9-4cf1-8abe-61fcfdc37bed no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Nov 18 08:18:18.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1650" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":302,"skipped":4864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Nov 18 08:18:18.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Nov 18 08:18:34.960: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 18 08:18:34.960: INFO: >>> kubeConfig: /root/.kube/config I1118 08:18:35.017339 10 log.go:181] (0x40022fe370) (0x40046655e0) Create stream I1118 08:18:35.017502 10 log.go:181] (0x40022fe370) (0x40046655e0) Stream added, broadcasting: 1 I1118 08:18:35.020822 10 log.go:181] (0x40022fe370) Reply frame received for 1 I1118 08:18:35.021043 10 log.go:181] (0x40022fe370) (0x4004665680) Create stream I1118 08:18:35.021139 10 log.go:181] (0x40022fe370) (0x4004665680) Stream added, broadcasting: 3 I1118 08:18:35.022400 10 log.go:181] (0x40022fe370) Reply frame received for 3 I1118 08:18:35.022602 10 log.go:181] (0x40022fe370) (0x4004310280) Create stream I1118 08:18:35.022746 10 log.go:181] (0x40022fe370) (0x4004310280) Stream added, broadcasting: 5 I1118 08:18:35.024309 10 log.go:181] (0x40022fe370) Reply frame received for 5 I1118 08:18:35.107145 10 log.go:181] (0x40022fe370) Data frame received for 3 I1118 08:18:35.107384 10 log.go:181] (0x4004665680) (3) Data frame handling I1118 08:18:35.107524 10 log.go:181] 
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Nov 18 08:18:34.960: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:34.960: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.110: INFO: Exec stderr: ""
Nov 18 08:18:35.110: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.110: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.269: INFO: Exec stderr: ""
Nov 18 08:18:35.269: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.269: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.424: INFO: Exec stderr: ""
Nov 18 08:18:35.424: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.424: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.556: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Nov 18 08:18:35.556: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.557: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.712: INFO: Exec stderr: ""
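Each ExecWithOptions entry above records one command run inside a container via the API server's exec subresource; the elided log.go:181 frames are the SPDY streams that each call opens and tears down. The following is a minimal client-go sketch of an equivalent call — it assumes an already-populated *rest.Config and clientset, and execCat is our illustrative name, not the e2e framework's own helper:

    package etchosts

    import (
    	"bytes"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/kubernetes/scheme"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/remotecommand"
    )

    // execCat runs `cat path` in the named container and returns its stdout,
    // mirroring what each ExecWithOptions log entry above is doing.
    func execCat(config *rest.Config, cs kubernetes.Interface, ns, pod, container, path string) (string, error) {
    	req := cs.CoreV1().RESTClient().Post().
    		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
    		VersionedParams(&corev1.PodExecOptions{
    			Container: container,
    			Command:   []string{"cat", path},
    			Stdout:    true,
    			Stderr:    true,
    		}, scheme.ParameterCodec)

    	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    	if err != nil {
    		return "", err
    	}
    	var stdout, stderr bytes.Buffer
    	// Stream blocks until the remote command exits; the SPDY stream
    	// setup and teardown it performs is what the elided frames show.
    	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
    	return stdout.String(), err
    }

The stdout returned by such a call is what the test compares against the expected hosts-file content; the stderr side is what the `Exec stderr: ""` lines report.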
Nov 18 08:18:35.712: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.712: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.828: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Nov 18 08:18:35.828: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.828: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:35.979: INFO: Exec stderr: ""
Nov 18 08:18:35.980: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:35.980: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:36.116: INFO: Exec stderr: ""
Nov 18 08:18:36.116: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:36.116: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:36.261: INFO: Exec stderr: ""
Nov 18 08:18:36.261: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4479 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 18 08:18:36.261: INFO: >>> kubeConfig: /root/.kube/config
[... log.go:181 SPDY stream create/data/teardown debug frames elided ...]
Nov 18 08:18:36.402: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Nov 18 08:18:36.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4479" for this suite.

• [SLOW TEST:17.694 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":303,"skipped":4888,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Nov 18 08:18:36.422: INFO: Running AfterSuite actions on all nodes
Nov 18 08:18:36.424: INFO: Running AfterSuite actions on node 1
Nov 18 08:18:36.424: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml

{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4931,"failed":0}

Ran 303 of 5234 Specs in 7208.797 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4931 Skipped
PASS