I0113 06:18:24.731781 10 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0113 06:18:24.738323 10 e2e.go:129] Starting e2e run "9230875c-c25c-4b62-91bc-9048446d6322" on Ginkgo node 1
{"msg":"Test Suite starting","total":309,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1610518691 - Will randomize all specs
Will run 309 of 5667 specs
Jan 13 06:18:25.442: INFO: >>> kubeConfig: /root/.kube/config
Jan 13 06:18:25.495: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 13 06:18:25.670: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 13 06:18:25.852: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 13 06:18:25.852: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 13 06:18:25.852: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 13 06:18:25.903: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 13 06:18:25.903: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 13 06:18:25.903: INFO: e2e test version: v1.20.1
Jan 13 06:18:25.909: INFO: kube-apiserver version: v1.20.0
Jan 13 06:18:25.910: INFO: >>> kubeConfig: /root/.kube/config
Jan 13 06:18:25.938: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:18:25.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 13 06:18:26.063: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 13 06:18:26.106: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af" in namespace "projected-416" to be "Succeeded or Failed"
Jan 13 06:18:26.135: INFO: Pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af": Phase="Pending", Reason="", readiness=false. Elapsed: 28.88153ms
Jan 13 06:18:28.149: INFO: Pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042629786s
Jan 13 06:18:30.159: INFO: Pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af": Phase="Running", Reason="", readiness=true. Elapsed: 4.052403156s
Jan 13 06:18:32.170: INFO: Pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063884727s
STEP: Saw pod success
Jan 13 06:18:32.171: INFO: Pod "downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af" satisfied condition "Succeeded or Failed"
Jan 13 06:18:32.177: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af container client-container:
STEP: delete the pod
Jan 13 06:18:32.299: INFO: Waiting for pod downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af to disappear
Jan 13 06:18:32.342: INFO: Pod downwardapi-volume-ef882e74-57ca-469b-9d3a-32f1d2fc49af no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:18:32.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-416" for this suite.
• [SLOW TEST:6.476 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:18:32.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129
[It] should run and stop simple daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 13 06:18:32.640: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:32.651: INFO: Number of nodes with available pods: 0
Jan 13 06:18:32.651: INFO: Node leguer-worker is running more than one daemon pod
Jan 13 06:18:33.663: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:33.675: INFO: Number of nodes with available pods: 0
Jan 13 06:18:33.675: INFO: Node leguer-worker is running more than one daemon pod
Jan 13 06:18:35.460: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:35.699: INFO: Number of nodes with available pods: 0
Jan 13 06:18:35.699: INFO: Node leguer-worker is running more than one daemon pod
Jan 13 06:18:36.667: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:36.866: INFO: Number of nodes with available pods: 0
Jan 13 06:18:36.867: INFO: Node leguer-worker is running more than one daemon pod
Jan 13 06:18:37.673: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:37.685: INFO: Number of nodes with available pods: 1
Jan 13 06:18:37.685: INFO: Node leguer-worker is running more than one daemon pod
Jan 13 06:18:38.661: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:38.668: INFO: Number of nodes with available pods: 2
Jan 13 06:18:38.668: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 13 06:18:38.743: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:38.751: INFO: Number of nodes with available pods: 1
Jan 13 06:18:38.751: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:39.838: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:39.844: INFO: Number of nodes with available pods: 1
Jan 13 06:18:39.845: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:40.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:40.769: INFO: Number of nodes with available pods: 1
Jan 13 06:18:40.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:41.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:41.768: INFO: Number of nodes with available pods: 1
Jan 13 06:18:41.768: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:42.766: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:42.771: INFO: Number of nodes with available pods: 1
Jan 13 06:18:42.771: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:43.768: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:43.776: INFO: Number of nodes with available pods: 1
Jan 13 06:18:43.777: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:44.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:44.770: INFO: Number of nodes with available pods: 1
Jan 13 06:18:44.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:45.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:45.769: INFO: Number of nodes with available pods: 1
Jan 13 06:18:45.769: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:46.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:46.768: INFO: Number of nodes with available pods: 1
Jan 13 06:18:46.768: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:47.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:47.769: INFO: Number of nodes with available pods: 1
Jan 13 06:18:47.769: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:48.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:48.770: INFO: Number of nodes with available pods: 1
Jan 13 06:18:48.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:49.766: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:49.772: INFO: Number of nodes with available pods: 1
Jan 13 06:18:49.772: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:50.765: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:50.773: INFO: Number of nodes with available pods: 1
Jan 13 06:18:50.773: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:51.776: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:51.783: INFO: Number of nodes with available pods: 1
Jan 13 06:18:51.783: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:52.760: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:52.765: INFO: Number of nodes with available pods: 1
Jan 13 06:18:52.765: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:53.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:53.770: INFO: Number of nodes with available pods: 1
Jan 13 06:18:53.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:54.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:54.769: INFO: Number of nodes with available pods: 1
Jan 13 06:18:54.769: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:55.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:55.769: INFO: Number of nodes with available pods: 1
Jan 13 06:18:55.769: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:56.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:56.770: INFO: Number of nodes with available pods: 1
Jan 13 06:18:56.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:57.764: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:57.770: INFO: Number of nodes with available pods: 1
Jan 13 06:18:57.770: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:58.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:58.772: INFO: Number of nodes with available pods: 1
Jan 13 06:18:58.772: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:18:59.763: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:18:59.775: INFO: Number of nodes with available pods: 1
Jan 13 06:18:59.775: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:19:00.762: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:19:00.769: INFO: Number of nodes with available pods: 1
Jan 13 06:19:00.769: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:19:01.888: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:19:01.895: INFO: Number of nodes with available pods: 1
Jan 13 06:19:01.895: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:19:02.773: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:19:02.813: INFO: Number of nodes with available pods: 1
Jan 13 06:19:02.813: INFO: Node leguer-worker2 is running more than one daemon pod
Jan 13 06:19:03.798: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jan 13 06:19:03.804: INFO: Number of nodes with available pods: 2
Jan 13 06:19:03.804: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8646, will wait for the garbage collector to delete the pods
Jan 13 06:19:03.882: INFO: Deleting DaemonSet.extensions daemon-set took: 11.705228ms
Jan 13 06:19:04.486: INFO: Terminating DaemonSet.extensions daemon-set pods took: 604.25227ms
Jan 13 06:19:59.993: INFO: Number of nodes with available pods: 0
Jan 13 06:19:59.993: INFO: Number of running nodes: 0, number of available pods: 0
Jan 13 06:20:00.021: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"484059"},"items":null}
Jan 13 06:20:00.027: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"484059"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:20:00.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8646" for this suite.
• [SLOW TEST:87.644 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":309,"completed":2,"skipped":43,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:20:00.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:20:04.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4372" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":309,"completed":3,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:20:04.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 13 06:20:09.543: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:20:09.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2596" for this suite.
• [SLOW TEST:5.382 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
blackbox test
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
on terminated container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":309,"completed":4,"skipped":74,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:20:09.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jan 13 06:20:09.708: INFO: PodSpec: initContainers in spec.initContainers
Jan 13 06:20:58.598: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-baad6085-db94-44a1-b1e9-46b7231f03dd", GenerateName:"", Namespace:"init-container-2239", SelfLink:"", UID:"2b07470e-ab3c-4c02-82b4-a2a2a179802f", ResourceVersion:"484378", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63746115609, loc:(*time.Location)(0x7089440)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"707184902"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003d48320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4003d48340)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x4003d48360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4003d483c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mft7w", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil),
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x4003d34200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mft7w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mft7w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mft7w", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4003d20508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40014523f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003d20590)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x4003d205b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x4003d205b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x4003d205bc), PreemptionPolicy:(*v1.PreemptionPolicy)(0x4003d5c0b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115609, loc:(*time.Location)(0x7089440)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115609, loc:(*time.Location)(0x7089440)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115609, loc:(*time.Location)(0x7089440)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115609, loc:(*time.Location)(0x7089440)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.148", 
PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.148"}}, StartTime:(*v1.Time)(0x4003d483e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x4003d48420), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x40014524d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2b61b8a23841dbaf144b46833e08610d2dc795fc4f0728ce2b73aef0c657afec", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003d48440), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x4003d48400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x4003d2063f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:20:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2239" for this suite. 
• [SLOW TEST:48.992 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":309,"completed":5,"skipped":81,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:20:58.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support --unix-socket=/path [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Starting the proxy
Jan 13 06:20:59.021: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7322 proxy --unix-socket=/tmp/kubectl-proxy-unix045907812/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:00.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7322" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":309,"completed":6,"skipped":84,"failed":0}
S
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:00.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service multi-endpoint-test in namespace services-7312
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7312 to expose endpoints map[]
Jan 13 06:21:00.400: INFO: successfully validated that service multi-endpoint-test in namespace services-7312 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-7312
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7312 to expose endpoints map[pod1:[100]]
Jan 13 06:21:05.551: INFO: successfully validated that service multi-endpoint-test in namespace services-7312 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-7312
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7312 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 13 06:21:08.706: INFO: successfully validated that service multi-endpoint-test in namespace services-7312 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-7312
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7312 to expose endpoints map[pod2:[101]]
Jan 13 06:21:08.809: INFO: successfully validated that service multi-endpoint-test in namespace services-7312 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-7312
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7312 to expose endpoints map[]
Jan 13 06:21:09.018: INFO: successfully validated that service multi-endpoint-test in namespace services-7312 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:09.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7312" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• [SLOW TEST:9.170 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":309,"completed":7,"skipped":85,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:09.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting the auto-created API token
Jan 13 06:21:10.457: INFO: created pod pod-service-account-defaultsa
Jan 13 06:21:10.457: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 13 06:21:10.472: INFO: created pod pod-service-account-mountsa
Jan 13 06:21:10.472: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 13 06:21:10.490: INFO: created pod pod-service-account-nomountsa
Jan 13 06:21:10.490: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 13 06:21:10.510: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 13 06:21:10.510: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 13 06:21:10.527: INFO: created pod pod-service-account-mountsa-mountspec
Jan 13 06:21:10.527: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 13 06:21:10.585: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 13 06:21:10.586: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 13 06:21:10.599: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 13 06:21:10.599: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 13 06:21:10.630: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 13 06:21:10.630: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 13 06:21:10.659: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 13 06:21:10.660: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:10.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1582" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":309,"completed":8,"skipped":120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:10.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support proxy with --port 0 [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: starting the proxy server
Jan 13 06:21:10.947: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4785 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:12.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4785" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":309,"completed":9,"skipped":149,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:12.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating Agnhost RC
Jan 13 06:21:12.644: INFO: namespace kubectl-5869
Jan 13 06:21:12.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5869 create -f -'
Jan 13 06:21:27.347: INFO: stderr: ""
Jan 13 06:21:27.347: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 13 06:21:28.359: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:28.360: INFO: Found 0 / 1
Jan 13 06:21:29.358: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:29.358: INFO: Found 0 / 1
Jan 13 06:21:30.390: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:30.390: INFO: Found 0 / 1
Jan 13 06:21:31.357: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:31.357: INFO: Found 0 / 1
Jan 13 06:21:32.357: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:32.358: INFO: Found 1 / 1
Jan 13 06:21:32.359: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 13 06:21:32.368: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 13 06:21:32.368: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 13 06:21:32.369: INFO: wait on agnhost-primary startup in kubectl-5869
Jan 13 06:21:32.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5869 logs agnhost-primary-8wnw4 agnhost-primary'
Jan 13 06:21:33.749: INFO: stderr: ""
Jan 13 06:21:33.749: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 13 06:21:33.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5869 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 13 06:21:35.201: INFO: stderr: ""
Jan 13 06:21:35.201: INFO: stdout: "service/rm2 exposed\n"
Jan 13 06:21:35.206: INFO: Service rm2 in namespace kubectl-5869 found.
STEP: exposing service
Jan 13 06:21:37.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5869 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 13 06:21:38.662: INFO: stderr: ""
Jan 13 06:21:38.662: INFO: stdout: "service/rm3 exposed\n"
Jan 13 06:21:38.667: INFO: Service rm3 in namespace kubectl-5869 found.
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:40.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5869" for this suite.
• [SLOW TEST:28.242 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229
should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":309,"completed":10,"skipped":152,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:40.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-5fec4342-118c-4b8a-8557-40bd3b20ceb3
STEP: Creating a pod to test consume secrets
Jan 13 06:21:40.827: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7" in namespace "projected-6472" to be "Succeeded or Failed"
Jan 13 06:21:40.864: INFO: Pod "pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.070448ms
Jan 13 06:21:42.871: INFO: Pod "pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044303918s
Jan 13 06:21:44.879: INFO: Pod "pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05242714s
STEP: Saw pod success
Jan 13 06:21:44.880: INFO: Pod "pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7" satisfied condition "Succeeded or Failed"
Jan 13 06:21:44.885: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7 container projected-secret-volume-test:
STEP: delete the pod
Jan 13 06:21:45.063: INFO: Waiting for pod pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7 to disappear
Jan 13 06:21:45.099: INFO: Pod pod-projected-secrets-af8235ba-4758-4443-8411-802e6d984fb7 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:45.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6472" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":11,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:45.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 06:21:51.343: INFO: Waiting up to 5m0s for pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe" in namespace "pods-7555" to be "Succeeded or Failed"
Jan 13 06:21:51.418: INFO: Pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe": Phase="Pending", Reason="", readiness=false. Elapsed: 75.051887ms
Jan 13 06:21:54.404: INFO: Pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061131607s
Jan 13 06:21:56.412: INFO: Pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe": Phase="Running", Reason="", readiness=true. Elapsed: 5.069160257s
Jan 13 06:21:58.421: INFO: Pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.077886307s
STEP: Saw pod success
Jan 13 06:21:58.421: INFO: Pod "client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe" satisfied condition "Succeeded or Failed"
Jan 13 06:21:58.429: INFO: Trying to get logs from node leguer-worker pod client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe container env3cont:
STEP: delete the pod
Jan 13 06:21:58.513: INFO: Waiting for pod client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe to disappear
Jan 13 06:21:58.538: INFO: Pod client-envvars-7c74d9b4-15fe-4291-b181-e79e920722fe no longer exists
[AfterEach] [k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 06:21:58.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7555" for this suite.
• [SLOW TEST:13.465 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":309,"completed":12,"skipped":198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 06:21:58.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.178.96.10.in-addr.arpa.
PTR)" && test -n "$$check" && echo OK > /results/10.96.178.202_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.178.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.178.202_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 06:22:07.253: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.267: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.299: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.308: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.315: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:07.339: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 06:22:12.347: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.350: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods 
dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.354: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.357: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.384: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.388: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.393: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.397: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:12.421: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 06:22:17.348: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.352: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.356: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.360: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.388: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the 
server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.395: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.399: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:17.423: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 06:22:22.346: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.350: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.353: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.359: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.383: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.387: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.391: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.394: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod 
dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:22.413: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 06:22:27.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.375: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.398: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.435: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.440: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:27.462: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 
06:22:32.347: INFO: Unable to read wheezy_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.353: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.358: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.362: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.391: INFO: Unable to read jessie_udp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.398: INFO: Unable to read jessie_tcp@dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.401: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.406: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local from pod dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6: the server could not find the requested resource (get pods dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6) Jan 13 06:22:32.429: INFO: Lookups using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 failed for: [wheezy_udp@dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@dns-test-service.dns-8471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_udp@dns-test-service.dns-8471.svc.cluster.local jessie_tcp@dns-test-service.dns-8471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8471.svc.cluster.local] Jan 13 06:22:37.432: INFO: DNS probes using dns-8471/dns-test-831b38fb-0d5c-4780-ac18-bfd66b534ed6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:22:38.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8471" for this suite. 
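The DNS probes logged above loop `dig` queries from the wheezy and jessie prober pods for A, SRV, PTR, and pod A-record names until each answer is non-empty. A minimal in-cluster sketch of the two service lookups using Go's standard resolver (service and namespace names taken from this run) could be:

package main

import (
    "fmt"
    "net"
)

func main() {
    // A record for the ClusterIP service, as the `dig ... A` probes do.
    addrs, err := net.LookupHost("dns-test-service.dns-8471.svc.cluster.local")
    fmt.Println("A records:", addrs, "err:", err)

    // SRV record for the named port, as the `dig _http._tcp.<svc> ... SRV` probes do.
    _, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-8471.svc.cluster.local")
    if err != nil {
        fmt.Println("SRV lookup failed:", err)
        return
    }
    for _, s := range srvs {
        fmt.Printf("SRV target %s port %d\n", s.Target, s.Port)
    }
}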
• [SLOW TEST:39.624 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":309,"completed":13,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:22:38.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 13 06:22:42.365: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1572 PodName:var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:22:42.365: INFO: >>> kubeConfig: /root/.kube/config I0113 06:22:42.448109 10 log.go:181] (0x4000ca73f0) (0x4001a73540) Create stream I0113 06:22:42.448523 10 log.go:181] (0x4000ca73f0) (0x4001a73540) Stream added, broadcasting: 1 I0113 06:22:42.462729 10 log.go:181] (0x4000ca73f0) Reply frame received for 1 I0113 06:22:42.463265 10 log.go:181] (0x4000ca73f0) (0x4001353860) Create stream I0113 06:22:42.463336 10 log.go:181] (0x4000ca73f0) (0x4001353860) Stream added, broadcasting: 3 I0113 06:22:42.465133 10 log.go:181] (0x4000ca73f0) Reply frame received for 3 I0113 06:22:42.465636 10 log.go:181] (0x4000ca73f0) (0x4001dde000) Create stream I0113 06:22:42.465781 10 log.go:181] (0x4000ca73f0) (0x4001dde000) Stream added, broadcasting: 5 I0113 06:22:42.467101 10 log.go:181] (0x4000ca73f0) Reply frame received for 5 I0113 06:22:42.538317 10 log.go:181] (0x4000ca73f0) Data frame received for 3 I0113 06:22:42.538749 10 log.go:181] (0x4000ca73f0) Data frame received for 5 I0113 06:22:42.539012 10 log.go:181] (0x4001dde000) (5) Data frame handling I0113 06:22:42.539269 10 log.go:181] (0x4001353860) (3) Data frame handling I0113 06:22:42.539532 10 log.go:181] (0x4000ca73f0) Data frame received for 1 I0113 06:22:42.539631 10 log.go:181] (0x4001a73540) (1) Data frame handling I0113 06:22:42.540438 10 log.go:181] (0x4001a73540) (1) Data frame sent I0113 06:22:42.541783 10 log.go:181] (0x4000ca73f0) (0x4001a73540) Stream removed, broadcasting: 1 I0113 06:22:42.543795 10 log.go:181] (0x4000ca73f0) Go away received I0113 06:22:42.545820 10 log.go:181] (0x4000ca73f0) (0x4001a73540) Stream removed, broadcasting: 1 I0113 06:22:42.546531 10 log.go:181] (0x4000ca73f0) (0x4001353860) Stream removed, broadcasting: 3 I0113 06:22:42.546813 10 log.go:181] (0x4000ca73f0) (0x4001dde000) Stream 
removed, broadcasting: 5 STEP: test for file in mounted path Jan 13 06:22:42.552: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1572 PodName:var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:22:42.552: INFO: >>> kubeConfig: /root/.kube/config I0113 06:22:42.615796 10 log.go:181] (0x4000818580) (0x40020506e0) Create stream I0113 06:22:42.616096 10 log.go:181] (0x4000818580) (0x40020506e0) Stream added, broadcasting: 1 I0113 06:22:42.627096 10 log.go:181] (0x4000818580) Reply frame received for 1 I0113 06:22:42.627300 10 log.go:181] (0x4000818580) (0x4001353900) Create stream I0113 06:22:42.627400 10 log.go:181] (0x4000818580) (0x4001353900) Stream added, broadcasting: 3 I0113 06:22:42.629009 10 log.go:181] (0x4000818580) Reply frame received for 3 I0113 06:22:42.629123 10 log.go:181] (0x4000818580) (0x4002050780) Create stream I0113 06:22:42.629183 10 log.go:181] (0x4000818580) (0x4002050780) Stream added, broadcasting: 5 I0113 06:22:42.630356 10 log.go:181] (0x4000818580) Reply frame received for 5 I0113 06:22:42.705283 10 log.go:181] (0x4000818580) Data frame received for 3 I0113 06:22:42.705426 10 log.go:181] (0x4001353900) (3) Data frame handling I0113 06:22:42.705687 10 log.go:181] (0x4000818580) Data frame received for 5 I0113 06:22:42.705893 10 log.go:181] (0x4002050780) (5) Data frame handling I0113 06:22:42.706610 10 log.go:181] (0x4000818580) Data frame received for 1 I0113 06:22:42.706709 10 log.go:181] (0x40020506e0) (1) Data frame handling I0113 06:22:42.706800 10 log.go:181] (0x40020506e0) (1) Data frame sent I0113 06:22:42.706876 10 log.go:181] (0x4000818580) (0x40020506e0) Stream removed, broadcasting: 1 I0113 06:22:42.706956 10 log.go:181] (0x4000818580) Go away received I0113 06:22:42.707463 10 log.go:181] (0x4000818580) (0x40020506e0) Stream removed, broadcasting: 1 I0113 06:22:42.707612 10 log.go:181] (0x4000818580) (0x4001353900) Stream removed, broadcasting: 3 I0113 06:22:42.707721 10 log.go:181] (0x4000818580) (0x4002050780) Stream removed, broadcasting: 5 STEP: updating the annotation value Jan 13 06:22:43.232: INFO: Successfully updated pod "var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 13 06:22:43.261: INFO: Deleting pod "var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb" in namespace "var-expansion-1572" Jan 13 06:22:43.269: INFO: Wait up to 5m0s for pod "var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:24:11.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1572" for this suite. 
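The subpath checks above are driven through the framework's ExecWithOptions, which amounts to running /bin/sh inside the target container. A rough standalone equivalent via `kubectl exec` is sketched below; the pod, namespace, and mount paths are the ones from this run, and kubectl is assumed to be configured against the same cluster:

package main

import (
    "fmt"
    "os/exec"
)

// runInPod execs a shell command in the given pod/container, mirroring the
// ExecWithOptions calls logged above.
func runInPod(ns, pod, container, cmd string) error {
    out, err := exec.Command("kubectl", "exec", "-n", ns, pod,
        "-c", container, "--", "/bin/sh", "-c", cmd).CombinedOutput()
    fmt.Printf("%s", out)
    return err
}

func main() {
    ns := "var-expansion-1572"
    pod := "var-expansion-30a22c02-fd43-4a33-9c06-5d456fc735fb"
    c := "dapi-container"
    // Create a file under the expanded subpath, then verify it is visible
    // through the subPath mount, as the test does.
    _ = runInPod(ns, pod, c, "touch /volume_mount/mypath/foo/test.log")
    if err := runInPod(ns, pod, c, "test -f /subpath_mount/test.log"); err != nil {
        fmt.Println("file not visible via subPath mount:", err)
    }
}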
• [SLOW TEST:93.102 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":309,"completed":14,"skipped":269,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:24:11.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-1849 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 06:24:11.390: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 06:24:11.533: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:24:13.575: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:24:15.544: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:17.551: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:19.559: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:21.580: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:23.541: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:25.541: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:27.557: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:29.555: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:24:31.544: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 06:24:31.560: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 13 06:24:35.631: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 13 06:24:35.632: INFO: Going to poll 10.244.2.158 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 13 06:24:35.635: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.158 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1849 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:24:35.636: INFO: >>> kubeConfig: /root/.kube/config I0113 06:24:35.702188 10 log.go:181] (0x40008afb80) (0x40023b0a00) Create stream I0113 06:24:35.702374 10 log.go:181] (0x40008afb80) (0x40023b0a00) 
Stream added, broadcasting: 1 I0113 06:24:35.706819 10 log.go:181] (0x40008afb80) Reply frame received for 1 I0113 06:24:35.707185 10 log.go:181] (0x40008afb80) (0x4000f54820) Create stream I0113 06:24:35.707332 10 log.go:181] (0x40008afb80) (0x4000f54820) Stream added, broadcasting: 3 I0113 06:24:35.709457 10 log.go:181] (0x40008afb80) Reply frame received for 3 I0113 06:24:35.709764 10 log.go:181] (0x40008afb80) (0x400308b7c0) Create stream I0113 06:24:35.709926 10 log.go:181] (0x40008afb80) (0x400308b7c0) Stream added, broadcasting: 5 I0113 06:24:35.711889 10 log.go:181] (0x40008afb80) Reply frame received for 5 I0113 06:24:36.783084 10 log.go:181] (0x40008afb80) Data frame received for 3 I0113 06:24:36.783272 10 log.go:181] (0x4000f54820) (3) Data frame handling I0113 06:24:36.783404 10 log.go:181] (0x40008afb80) Data frame received for 5 I0113 06:24:36.783523 10 log.go:181] (0x400308b7c0) (5) Data frame handling I0113 06:24:36.783633 10 log.go:181] (0x4000f54820) (3) Data frame sent I0113 06:24:36.783771 10 log.go:181] (0x40008afb80) Data frame received for 3 I0113 06:24:36.783889 10 log.go:181] (0x4000f54820) (3) Data frame handling I0113 06:24:36.784227 10 log.go:181] (0x40008afb80) Data frame received for 1 I0113 06:24:36.784394 10 log.go:181] (0x40023b0a00) (1) Data frame handling I0113 06:24:36.784517 10 log.go:181] (0x40023b0a00) (1) Data frame sent I0113 06:24:36.784652 10 log.go:181] (0x40008afb80) (0x40023b0a00) Stream removed, broadcasting: 1 I0113 06:24:36.784772 10 log.go:181] (0x40008afb80) Go away received I0113 06:24:36.785436 10 log.go:181] (0x40008afb80) (0x40023b0a00) Stream removed, broadcasting: 1 I0113 06:24:36.785543 10 log.go:181] (0x40008afb80) (0x4000f54820) Stream removed, broadcasting: 3 I0113 06:24:36.785628 10 log.go:181] (0x40008afb80) (0x400308b7c0) Stream removed, broadcasting: 5 Jan 13 06:24:36.786: INFO: Found all 1 expected endpoints: [netserver-0] Jan 13 06:24:36.786: INFO: Going to poll 10.244.1.230 on port 8081 at least 0 times, with a maximum of 34 tries before failing Jan 13 06:24:36.794: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.230 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1849 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:24:36.794: INFO: >>> kubeConfig: /root/.kube/config I0113 06:24:36.847872 10 log.go:181] (0x4000818160) (0x4000f54aa0) Create stream I0113 06:24:36.848044 10 log.go:181] (0x4000818160) (0x4000f54aa0) Stream added, broadcasting: 1 I0113 06:24:36.852799 10 log.go:181] (0x4000818160) Reply frame received for 1 I0113 06:24:36.853145 10 log.go:181] (0x4000818160) (0x40023b0d20) Create stream I0113 06:24:36.853284 10 log.go:181] (0x4000818160) (0x40023b0d20) Stream added, broadcasting: 3 I0113 06:24:36.855192 10 log.go:181] (0x4000818160) Reply frame received for 3 I0113 06:24:36.855381 10 log.go:181] (0x4000818160) (0x40011e2000) Create stream I0113 06:24:36.855520 10 log.go:181] (0x4000818160) (0x40011e2000) Stream added, broadcasting: 5 I0113 06:24:36.856724 10 log.go:181] (0x4000818160) Reply frame received for 5 I0113 06:24:37.917681 10 log.go:181] (0x4000818160) Data frame received for 5 I0113 06:24:37.917912 10 log.go:181] (0x40011e2000) (5) Data frame handling I0113 06:24:37.918108 10 log.go:181] (0x4000818160) Data frame received for 3 I0113 06:24:37.918320 10 log.go:181] (0x40023b0d20) (3) Data frame handling I0113 06:24:37.918545 10 log.go:181] (0x40023b0d20) (3) 
Data frame sent I0113 06:24:37.918728 10 log.go:181] (0x4000818160) Data frame received for 3 I0113 06:24:37.918914 10 log.go:181] (0x40023b0d20) (3) Data frame handling I0113 06:24:37.920199 10 log.go:181] (0x4000818160) Data frame received for 1 I0113 06:24:37.920347 10 log.go:181] (0x4000f54aa0) (1) Data frame handling I0113 06:24:37.920480 10 log.go:181] (0x4000f54aa0) (1) Data frame sent I0113 06:24:37.920602 10 log.go:181] (0x4000818160) (0x4000f54aa0) Stream removed, broadcasting: 1 I0113 06:24:37.920781 10 log.go:181] (0x4000818160) Go away received I0113 06:24:37.921204 10 log.go:181] (0x4000818160) (0x4000f54aa0) Stream removed, broadcasting: 1 I0113 06:24:37.921332 10 log.go:181] (0x4000818160) (0x40023b0d20) Stream removed, broadcasting: 3 I0113 06:24:37.921480 10 log.go:181] (0x4000818160) (0x40011e2000) Stream removed, broadcasting: 5 Jan 13 06:24:37.921: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:24:37.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1849" for this suite. • [SLOW TEST:26.623 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":15,"skipped":270,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:24:37.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:24:41.374: INFO: Checking APIGroup: apiregistration.k8s.io Jan 13 06:24:41.379: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 13 06:24:41.379: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.379: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 13 06:24:41.379: INFO: Checking APIGroup: apps Jan 13 06:24:41.381: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 13 06:24:41.381: INFO: Versions found [{apps/v1 v1}] Jan 13 06:24:41.381: INFO: apps/v1 matches 
apps/v1 Jan 13 06:24:41.381: INFO: Checking APIGroup: events.k8s.io Jan 13 06:24:41.384: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 13 06:24:41.384: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.384: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 13 06:24:41.384: INFO: Checking APIGroup: authentication.k8s.io Jan 13 06:24:41.386: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 13 06:24:41.386: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.386: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 13 06:24:41.386: INFO: Checking APIGroup: authorization.k8s.io Jan 13 06:24:41.388: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 13 06:24:41.388: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.388: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 13 06:24:41.388: INFO: Checking APIGroup: autoscaling Jan 13 06:24:41.390: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 13 06:24:41.390: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 13 06:24:41.390: INFO: autoscaling/v1 matches autoscaling/v1 Jan 13 06:24:41.390: INFO: Checking APIGroup: batch Jan 13 06:24:41.392: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 13 06:24:41.392: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 13 06:24:41.392: INFO: batch/v1 matches batch/v1 Jan 13 06:24:41.392: INFO: Checking APIGroup: certificates.k8s.io Jan 13 06:24:41.394: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 13 06:24:41.394: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.394: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 13 06:24:41.394: INFO: Checking APIGroup: networking.k8s.io Jan 13 06:24:41.395: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 13 06:24:41.395: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.396: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 13 06:24:41.396: INFO: Checking APIGroup: extensions Jan 13 06:24:41.397: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 13 06:24:41.397: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 13 06:24:41.397: INFO: extensions/v1beta1 matches extensions/v1beta1 Jan 13 06:24:41.397: INFO: Checking APIGroup: policy Jan 13 06:24:41.399: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 13 06:24:41.399: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 13 06:24:41.399: INFO: policy/v1beta1 matches policy/v1beta1 Jan 13 06:24:41.400: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 13 06:24:41.402: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 13 06:24:41.402: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.402: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 13 06:24:41.402: INFO: Checking APIGroup: storage.k8s.io Jan 13 06:24:41.404: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 13 06:24:41.404: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.404: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 13 06:24:41.404: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 13 
06:24:41.406: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 13 06:24:41.406: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.406: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 13 06:24:41.406: INFO: Checking APIGroup: apiextensions.k8s.io Jan 13 06:24:41.407: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 13 06:24:41.407: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.407: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 13 06:24:41.407: INFO: Checking APIGroup: scheduling.k8s.io Jan 13 06:24:41.409: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 13 06:24:41.409: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.409: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 13 06:24:41.409: INFO: Checking APIGroup: coordination.k8s.io Jan 13 06:24:41.411: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 13 06:24:41.411: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.411: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 13 06:24:41.411: INFO: Checking APIGroup: node.k8s.io Jan 13 06:24:41.413: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 13 06:24:41.413: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.414: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 13 06:24:41.414: INFO: Checking APIGroup: discovery.k8s.io Jan 13 06:24:41.416: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 13 06:24:41.416: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.416: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Jan 13 06:24:41.416: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 13 06:24:41.418: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jan 13 06:24:41.418: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 13 06:24:41.418: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Jan 13 06:24:41.418: INFO: Checking APIGroup: pingcap.com Jan 13 06:24:41.420: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Jan 13 06:24:41.420: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Jan 13 06:24:41.420: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:24:41.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-6303" for this suite. 
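The Discovery test above iterates every API group advertised by the apiserver and verifies that each group's preferredVersion appears in its list of versions. A minimal sketch of the same check against the /apis endpoint, assuming the API is reachable through `kubectl proxy` on 127.0.0.1:8001:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type groupVersion struct {
    GroupVersion string `json:"groupVersion"`
    Version      string `json:"version"`
}

type apiGroupList struct {
    Groups []struct {
        Name             string         `json:"name"`
        Versions         []groupVersion `json:"versions"`
        PreferredVersion groupVersion   `json:"preferredVersion"`
    } `json:"groups"`
}

func main() {
    // /apis returns the APIGroupList used by the test's discovery checks.
    resp, err := http.Get("http://127.0.0.1:8001/apis")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var list apiGroupList
    if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
        panic(err)
    }
    for _, g := range list.Groups {
        found := false
        for _, v := range g.Versions {
            if v.GroupVersion == g.PreferredVersion.GroupVersion {
                found = true
            }
        }
        fmt.Printf("%s: preferred %s listed=%v\n", g.Name, g.PreferredVersion.GroupVersion, found)
    }
}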
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":309,"completed":16,"skipped":281,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:24:41.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 06:24:41.574: INFO: Waiting up to 5m0s for pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19" in namespace "projected-1136" to be "Succeeded or Failed" Jan 13 06:24:41.589: INFO: Pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19": Phase="Pending", Reason="", readiness=false. Elapsed: 15.088976ms Jan 13 06:24:43.698: INFO: Pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124062639s Jan 13 06:24:45.991: INFO: Pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19": Phase="Running", Reason="", readiness=true. Elapsed: 4.416382856s Jan 13 06:24:47.999: INFO: Pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42469172s STEP: Saw pod success Jan 13 06:24:47.999: INFO: Pod "downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19" satisfied condition "Succeeded or Failed" Jan 13 06:24:48.006: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19 container client-container: STEP: delete the pod Jan 13 06:24:48.083: INFO: Waiting for pod downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19 to disappear Jan 13 06:24:48.086: INFO: Pod downwardapi-volume-347d24d3-65e6-4569-b2de-e00cf51eba19 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:24:48.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1136" for this suite. 
• [SLOW TEST:6.660 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":17,"skipped":289,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:24:48.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:24:52.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2134" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":309,"completed":18,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:24:52.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-d95ff16c-643c-4a30-a4d0-36480a72b059 STEP: Creating configMap with name cm-test-opt-upd-cc4e197b-a0a1-4704-ab65-1e2086445499 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-d95ff16c-643c-4a30-a4d0-36480a72b059 STEP: Updating configmap cm-test-opt-upd-cc4e197b-a0a1-4704-ab65-1e2086445499 STEP: Creating configMap with name cm-test-opt-create-49dd2e51-132a-4eea-922a-0fa71402eef4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:26:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-3602" for this suite. • [SLOW TEST:99.173 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":19,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:26:31.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:26:31.709: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 13 06:26:31.722: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 13 06:26:36.731: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 06:26:36.732: INFO: Creating deployment "test-rolling-update-deployment" Jan 13 06:26:36.748: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 13 06:26:36.803: INFO: deployment "test-rolling-update-deployment" doesn't have the required revision set Jan 13 06:26:38.827: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 13 06:26:38.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115996, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115996, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115996, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746115996, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-6b6bf9df46\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:26:40.990: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 06:26:41.051: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6280 f23716d3-8828-4da7-b802-db7fb8000ac3 485969 1 2021-01-13 06:26:36 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-01-13 06:26:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 06:26:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40008d8438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-13 06:26:36 +0000 UTC,LastTransitionTime:2021-01-13 06:26:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2021-01-13 06:26:40 +0000 UTC,LastTransitionTime:2021-01-13 06:26:36 +0000 
UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 13 06:26:41.061: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-6280 dbd9b763-a690-497f-9512-5a586c165595 485958 1 2021-01-13 06:26:36 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment f23716d3-8828-4da7-b802-db7fb8000ac3 0x40008d88c7 0x40008d88c8}] [] [{kube-controller-manager Update apps/v1 2021-01-13 06:26:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f23716d3-8828-4da7-b802-db7fb8000ac3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40008d8958 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 06:26:41.062: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 13 06:26:41.062: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6280 e350903d-9ef8-414c-a007-a57176cc9cc9 485968 2 2021-01-13 06:26:31 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment f23716d3-8828-4da7-b802-db7fb8000ac3 0x40008d87b7 0x40008d87b8}] [] [{e2e.test Update apps/v1 2021-01-13 06:26:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 06:26:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f23716d3-8828-4da7-b802-db7fb8000ac3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40008d8858 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 06:26:41.081: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-x9bdz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-x9bdz test-rolling-update-deployment-6b6bf9df46- deployment-6280 6e6cd16a-2d4c-4975-852c-96dd2ff7e849 485957 0 2021-01-13 06:26:36 +0000 UTC map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 dbd9b763-a690-497f-9512-5a586c165595 0x40008d8fc7 0x40008d8fc8}] [] [{kube-controller-manager Update v1 2021-01-13 06:26:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dbd9b763-a690-497f-9512-5a586c165595\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 06:26:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.165\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sk46k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sk46k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sk46k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 06:26:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 06:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 06:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 06:26:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.165,StartTime:2021-01-13 06:26:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 06:26:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://79c08c5a48ab0a8be860f80e39d1dc32c4f494230ec4d25d8fc1c2d61b190ec7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:26:41.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6280" for this suite. • [SLOW TEST:9.546 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":20,"skipped":401,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:26:41.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in volume subpath Jan 13 06:26:41.271: INFO: Waiting up to 5m0s for pod "var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a" in namespace "var-expansion-1180" to be "Succeeded or Failed" Jan 13 06:26:41.312: INFO: Pod "var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.554114ms Jan 13 06:26:43.385: INFO: Pod "var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.112991516s Jan 13 06:26:45.393: INFO: Pod "var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121333556s STEP: Saw pod success Jan 13 06:26:45.393: INFO: Pod "var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a" satisfied condition "Succeeded or Failed" Jan 13 06:26:45.399: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a container dapi-container: STEP: delete the pod Jan 13 06:26:45.442: INFO: Waiting for pod var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a to disappear Jan 13 06:26:45.456: INFO: Pod var-expansion-e641ac7d-b792-4e1b-bc25-1a52dafafc2a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:26:45.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1180" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":309,"completed":21,"skipped":402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:26:45.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-projected-all-test-volume-dd9f32d9-241b-4b38-be0b-c56a1c2d3424 STEP: Creating secret with name secret-projected-all-test-volume-693d1cb0-3427-43c9-b146-27f00d077a3a STEP: Creating a pod to test Check all projections for projected volume plugin Jan 13 06:26:45.652: INFO: Waiting up to 5m0s for pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337" in namespace "projected-671" to be "Succeeded or Failed" Jan 13 06:26:45.684: INFO: Pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337": Phase="Pending", Reason="", readiness=false. Elapsed: 32.027665ms Jan 13 06:26:47.726: INFO: Pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074006744s Jan 13 06:26:49.734: INFO: Pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081861279s Jan 13 06:26:51.739: INFO: Pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.087379643s STEP: Saw pod success Jan 13 06:26:51.739: INFO: Pod "projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337" satisfied condition "Succeeded or Failed" Jan 13 06:26:51.743: INFO: Trying to get logs from node leguer-worker2 pod projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337 container projected-all-volume-test: STEP: delete the pod Jan 13 06:26:51.829: INFO: Waiting for pod projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337 to disappear Jan 13 06:26:51.833: INFO: Pod projected-volume-8d853e94-d252-419c-8f31-a427bc8e1337 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:26:51.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-671" for this suite. • [SLOW TEST:6.337 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":309,"completed":22,"skipped":434,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:26:51.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:26:53.715: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:26:55.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:26:57.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116013, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:27:00.780: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:00.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8653" for this suite. STEP: Destroying namespace "webhook-8653-markers" for this suite. 
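The webhook steps above (creating a mutating webhook configuration, removing the CREATE operation from its rules, then adding it back) boil down to a read-modify-write of a MutatingWebhookConfiguration object. Below is a minimal client-go sketch of that kind of call, not the suite's actual code: the configuration name is a placeholder, a context-taking client-go (v0.18 or later) is assumed, and the sketch only shows the "drop CREATE" half of the cycle; restoring it is the symmetric update or patch.

package main

import (
	"context"
	"log"

	admissionregv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "example-mutating-webhook" is a placeholder; the suite generates its own name.
	// The sketch assumes the configuration has at least one webhook with one rule.
	name := "example-mutating-webhook"
	cfgObj, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Drop CREATE from the first webhook's rules, mirroring the
	// "rules to not include the create operation" step above.
	cfgObj.Webhooks[0].Rules[0].Operations = []admissionregv1.OperationType{admissionregv1.Update}
	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().Update(context.TODO(), cfgObj, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}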
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.217 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":309,"completed":23,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:01.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-3e07833b-9b14-43ec-a05b-79d0d61d27fe STEP: Creating a pod to test consume secrets Jan 13 06:27:01.159: INFO: Waiting up to 5m0s for pod "pod-secrets-a59013c9-288a-478c-a857-136681454cfe" in namespace "secrets-8606" to be "Succeeded or Failed" Jan 13 06:27:01.175: INFO: Pod "pod-secrets-a59013c9-288a-478c-a857-136681454cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 15.289957ms Jan 13 06:27:03.182: INFO: Pod "pod-secrets-a59013c9-288a-478c-a857-136681454cfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022726794s Jan 13 06:27:05.192: INFO: Pod "pod-secrets-a59013c9-288a-478c-a857-136681454cfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032617945s STEP: Saw pod success Jan 13 06:27:05.193: INFO: Pod "pod-secrets-a59013c9-288a-478c-a857-136681454cfe" satisfied condition "Succeeded or Failed" Jan 13 06:27:05.197: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-a59013c9-288a-478c-a857-136681454cfe container secret-volume-test: STEP: delete the pod Jan 13 06:27:05.277: INFO: Waiting for pod pod-secrets-a59013c9-288a-478c-a857-136681454cfe to disappear Jan 13 06:27:05.395: INFO: Pod pod-secrets-a59013c9-288a-478c-a857-136681454cfe no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8606" for this suite. 
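The secrets test above, like most pod-based cases in this run, waits "up to 5m0s" for the test pod to reach "Succeeded or Failed", polling the phase every couple of seconds. A minimal sketch of that polling pattern with client-go is shown below; it is not the framework's own helper, the namespace and pod name are placeholders, and wait.PollImmediate is assumed to be available (newer releases prefer the context-aware variants).

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion polls the pod phase every 2s until it is Succeeded or Failed,
// the same condition the log reports for each test pod.
func waitForPodCompletion(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return true, nil
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Placeholder namespace and pod name, not the suite's generated ones.
	if err := waitForPodCompletion(client, "secrets-example", "pod-secrets-example", 5*time.Minute); err != nil {
		log.Fatal(err)
	}
}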
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":24,"skipped":473,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:05.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Request ServerVersion STEP: Confirm major version Jan 13 06:27:05.489: INFO: Major version: 1 STEP: Confirm minor version Jan 13 06:27:05.489: INFO: cleanMinorVersion: 20 Jan 13 06:27:05.489: INFO: Minor version: 20 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:05.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-1255" for this suite. •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":309,"completed":25,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:05.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-f84278c7-9cbb-48df-967e-84e7825e9e1e STEP: Creating a pod to test consume configMaps Jan 13 06:27:05.696: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742" in namespace "projected-2475" to be "Succeeded or Failed" Jan 13 06:27:05.717: INFO: Pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742": Phase="Pending", Reason="", readiness=false. Elapsed: 20.601655ms Jan 13 06:27:07.724: INFO: Pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027620639s Jan 13 06:27:09.733: INFO: Pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036238198s Jan 13 06:27:11.741: INFO: Pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044560558s STEP: Saw pod success Jan 13 06:27:11.741: INFO: Pod "pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742" satisfied condition "Succeeded or Failed" Jan 13 06:27:11.746: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742 container agnhost-container: STEP: delete the pod Jan 13 06:27:11.784: INFO: Waiting for pod pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742 to disappear Jan 13 06:27:11.798: INFO: Pod pod-projected-configmaps-3ca6fe52-ef7c-40e8-9164-a4ccfb2e5742 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:11.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2475" for this suite. • [SLOW TEST:6.287 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":26,"skipped":507,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:11.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:27:15.098: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:27:17.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116035, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116035, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116035, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116035, loc:(*time.Location)(0x7089440)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:27:20.296: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Jan 13 06:27:20.334: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:20.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4143" for this suite. STEP: Destroying namespace "webhook-4143-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.691 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":309,"completed":27,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:20.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 13 06:27:25.177: INFO: Successfully updated pod "pod-update-ad2d0f73-cbb8-4814-ae08-bde6c634c8d1" STEP: verifying the updated pod is in kubernetes Jan 13 06:27:25.260: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:27:25.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9053" for this suite. 
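The "Pods should be updated" case above modifies a running pod and then verifies the update took effect. Because the kubelet updates pod status concurrently, an in-place update of this kind is normally wrapped in a conflict-retry loop. A minimal sketch of that pattern follows; the namespace, pod name, and label key are placeholders and this is not the suite's actual code.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "pods-example", "pod-update-example" // placeholders

	// Re-fetch and re-apply the change on every 409 Conflict so a concurrent
	// status update cannot make the label change fail permanently.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, getErr := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated"
		_, updateErr := client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Pod update OK")
}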
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":309,"completed":28,"skipped":540,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:27:25.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:27:25.356: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 13 06:27:48.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3569 --namespace=crd-publish-openapi-3569 create -f -' Jan 13 06:27:54.383: INFO: stderr: "" Jan 13 06:27:54.384: INFO: stdout: "e2e-test-crd-publish-openapi-1409-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 13 06:27:54.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3569 --namespace=crd-publish-openapi-3569 delete e2e-test-crd-publish-openapi-1409-crds test-cr' Jan 13 06:27:55.686: INFO: stderr: "" Jan 13 06:27:55.686: INFO: stdout: "e2e-test-crd-publish-openapi-1409-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 13 06:27:55.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3569 --namespace=crd-publish-openapi-3569 apply -f -' Jan 13 06:27:58.144: INFO: stderr: "" Jan 13 06:27:58.144: INFO: stdout: "e2e-test-crd-publish-openapi-1409-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 13 06:27:58.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3569 --namespace=crd-publish-openapi-3569 delete e2e-test-crd-publish-openapi-1409-crds test-cr' Jan 13 06:27:59.479: INFO: stderr: "" Jan 13 06:27:59.479: INFO: stdout: "e2e-test-crd-publish-openapi-1409-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 13 06:27:59.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3569 explain e2e-test-crd-publish-openapi-1409-crds' Jan 13 06:28:01.892: INFO: stderr: "" Jan 13 06:28:01.892: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1409-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:28:25.022: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3569" for this suite. • [SLOW TEST:59.759 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":309,"completed":29,"skipped":545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:28:25.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:29:25.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6094" for this suite. 
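The probing case above runs a container whose readiness probe never succeeds, then asserts the pod never becomes Ready and never restarts. A minimal sketch of the kind of pod spec that produces this behavior is shown below; it is illustrative only, the image, names, and namespace are placeholders, and it assumes a recent k8s.io/api where the probe's embedded handler field is named ProbeHandler (older releases call it Handler).

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-example"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
				// An exec probe that always exits non-zero: the container keeps running,
				// but the pod's Ready condition never becomes True and nothing restarts.
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("container-probe-example").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}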
• [SLOW TEST:60.212 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":309,"completed":30,"skipped":590,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:29:25.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:30:00.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8659" for this suite. 
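The container runtime blackbox case above checks a container's RestartCount, Phase, Ready condition, and terminal State for each restart policy. Those assertions all read from the pod's status; the sketch below shows how that status is inspected with client-go. It is not the suite's code, and the namespace and pod name are placeholders.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Placeholder namespace and name; the suite generates its own.
	pod, err := client.CoreV1().Pods("container-runtime-example").Get(context.TODO(), "terminate-cmd-example", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("Phase: %s\n", pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: restarts=%d ready=%t\n", cs.Name, cs.RestartCount, cs.Ready)
		if t := cs.State.Terminated; t != nil {
			// For a container that has exited, the terminated state carries the
			// exit code and reason the assertions are made against.
			fmt.Printf("  terminated: exitCode=%d reason=%s\n", t.ExitCode, t.Reason)
		}
	}
}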
• [SLOW TEST:35.552 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":309,"completed":31,"skipped":597,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:30:00.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:30:04.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:30:06.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:30:08.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116204, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:30:11.587: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:30:12.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-711" for this suite. STEP: Destroying namespace "webhook-711-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.995 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":309,"completed":32,"skipped":604,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:30:12.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:30:12.991: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:30:15.044: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Pending, waiting for it 
to be Running (with Ready = true) Jan 13 06:30:16.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:18.998: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:20.998: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:22.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:24.998: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:26.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:28.997: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:30.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:32.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = false) Jan 13 06:30:34.999: INFO: The status of Pod test-webserver-a334dbfc-a4a8-467d-ba55-6d07695dc902 is Running (Ready = true) Jan 13 06:30:35.005: INFO: Container started at 2021-01-13 06:30:15 +0000 UTC, pod became ready at 2021-01-13 06:30:33 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:30:35.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-727" for this suite. • [SLOW TEST:22.218 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":309,"completed":33,"skipped":607,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:30:35.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0113 06:30:45.383750 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 06:31:47.413: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
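The garbage collector case above deletes a replication controller "when not orphaning", i.e. with a propagation policy that lets the garbage collector remove the pods the RC owns. A minimal sketch of that delete call is shown below; the namespace and RC name are placeholders and this is not the suite's implementation.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background (or Foreground) propagation tells the garbage collector to delete the
	// pods owned by the RC; DeletePropagationOrphan would leave them behind instead.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-example").Delete(
		context.TODO(), "simpletest-rc-example", metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("rc deleted; owned pods will be garbage collected")
}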
[AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:31:47.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7762" for this suite. • [SLOW TEST:72.409 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":309,"completed":34,"skipped":616,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:31:47.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-3110017b-c536-43f8-b5e6-4b6023e82ef2 STEP: Creating a pod to test consume secrets Jan 13 06:31:47.658: INFO: Waiting up to 5m0s for pod "pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e" in namespace "secrets-3705" to be "Succeeded or Failed" Jan 13 06:31:47.675: INFO: Pod "pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.304699ms Jan 13 06:31:49.684: INFO: Pod "pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025675999s Jan 13 06:31:51.692: INFO: Pod "pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034309027s STEP: Saw pod success Jan 13 06:31:51.693: INFO: Pod "pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e" satisfied condition "Succeeded or Failed" Jan 13 06:31:51.698: INFO: Trying to get logs from node leguer-worker pod pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e container secret-volume-test: STEP: delete the pod Jan 13 06:31:51.750: INFO: Waiting for pod pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e to disappear Jan 13 06:31:51.758: INFO: Pod pod-secrets-600c9cbc-c09f-45e3-8c6d-c94102b7ea1e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:31:51.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3705" for this suite. 
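The "consumable from pods in volume with mappings" secrets case above mounts a secret and remaps one of its keys to a custom file path via the volume's items list. The sketch below builds a pod of that shape with the typed API and prints it; it is illustrative only, with placeholder names, image, and key, not the manifest the suite actually creates.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePod returns a pod spec that mounts one key of a secret at a remapped path.
func examplePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-mappings-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-example",
						// Remap the key "data-1" to the file "new-path-data-1"
						// instead of using the key name as the file name.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"cat", "/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(examplePod(), "", "  ")
	fmt.Println(string(b))
}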
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":35,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:31:51.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:31:51.909: INFO: Create a RollingUpdate DaemonSet Jan 13 06:31:51.948: INFO: Check that daemon pods launch on every node of the cluster Jan 13 06:31:51.989: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:52.007: INFO: Number of nodes with available pods: 0 Jan 13 06:31:52.008: INFO: Node leguer-worker is running more than one daemon pod Jan 13 06:31:53.021: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:53.052: INFO: Number of nodes with available pods: 0 Jan 13 06:31:53.052: INFO: Node leguer-worker is running more than one daemon pod Jan 13 06:31:54.018: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:54.023: INFO: Number of nodes with available pods: 0 Jan 13 06:31:54.024: INFO: Node leguer-worker is running more than one daemon pod Jan 13 06:31:55.104: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:55.110: INFO: Number of nodes with available pods: 0 Jan 13 06:31:55.110: INFO: Node leguer-worker is running more than one daemon pod Jan 13 06:31:56.022: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:56.028: INFO: Number of nodes with available pods: 1 Jan 13 06:31:56.028: INFO: Node leguer-worker is running more than one daemon pod Jan 13 06:31:57.020: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:31:57.026: INFO: Number of nodes with available pods: 2 Jan 13 06:31:57.026: INFO: Number of running nodes: 2, number of available pods: 2 Jan 13 06:31:57.026: INFO: Update the DaemonSet to trigger a rollout Jan 13 06:31:57.053: INFO: Updating DaemonSet daemon-set Jan 13 06:32:10.109: INFO: Roll back the DaemonSet before 
rollout is complete Jan 13 06:32:10.118: INFO: Updating DaemonSet daemon-set Jan 13 06:32:10.118: INFO: Make sure DaemonSet rollback is complete Jan 13 06:32:10.146: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:10.146: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:10.164: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:11.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:11.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:11.180: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:12.304: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:12.305: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:12.384: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:13.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:13.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:13.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:14.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:14.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:14.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:15.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:15.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:15.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:16.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:16.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:16.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:17.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:17.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:17.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:18.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
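At this point the test has updated the DaemonSet to an unpullable image (foo:non-existent) and immediately requested a rollback, and it now polls until every pod is back on the original docker.io/library/httpd:2.4.38-alpine image. Outside the framework, the same update-then-undo sequence can be sketched with kubectl; the container name "app" is an assumption, since the log does not show the container name the framework uses:

# Break the DaemonSet by pointing it at an image that cannot be pulled.
kubectl -n daemonsets-2112 set image daemonset/daemon-set app=foo:non-existent

# Roll back to the previous revision before the broken rollout finishes.
kubectl -n daemonsets-2112 rollout undo daemonset/daemon-set

# Inspect the revision history to confirm the rollback was recorded.
kubectl -n daemonsets-2112 rollout history daemonset/daemon-set

Because only the pod that already picked up the bad image has to be replaced, the healthy pods keep running, which is the "without unnecessary restarts" property this test asserts.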
Jan 13 06:32:18.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:18.187: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:19.189: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:19.189: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:19.199: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:20.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:20.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:20.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:21.219: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:21.219: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:21.226: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:22.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:22.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:22.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:23.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:23.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:23.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:24.352: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:24.352: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:24.360: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:25.190: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:25.190: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:25.197: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:26.185: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:26.185: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:26.206: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:27.176: INFO: Wrong image for pod: daemon-set-gw5x9. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:27.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:27.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:28.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:28.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:28.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:29.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:29.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:29.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:30.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:30.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:30.187: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:31.185: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:31.185: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:31.194: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:32.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:32.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:32.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:33.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:33.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:33.206: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:34.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:34.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:34.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:35.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 13 06:32:35.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:35.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:36.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:36.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:36.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:37.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:37.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:37.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:38.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:38.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:38.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:39.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:39.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:39.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:40.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:40.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:40.187: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:41.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:41.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:41.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:42.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:42.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:42.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:43.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:43.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:43.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:44.175: INFO: Wrong image for pod: daemon-set-gw5x9. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:44.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:44.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:45.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:45.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:45.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:46.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:46.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:46.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:47.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:47.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:47.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:48.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:48.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:48.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:49.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:49.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:49.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:50.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:50.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:50.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:51.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:51.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:51.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:52.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Jan 13 06:32:52.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:52.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:53.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:53.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:53.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:54.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:54.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:54.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:55.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:55.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:55.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:56.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:56.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:56.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:57.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:57.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:57.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:58.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:58.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:58.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:32:59.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:32:59.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:32:59.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:00.173: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:00.173: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:00.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:01.175: INFO: Wrong image for pod: daemon-set-gw5x9. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:01.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:01.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:02.174: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:02.174: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:02.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:03.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:03.176: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:03.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:04.171: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:04.172: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:04.200: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:05.176: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:05.177: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:05.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:06.172: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:06.172: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:06.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:07.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:07.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:07.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:08.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Jan 13 06:33:08.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:08.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:09.175: INFO: Wrong image for pod: daemon-set-gw5x9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
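The long run of near-identical messages above is the polling loop: pod daemon-set-gw5x9 still carries the bad image and stays unavailable until the controller replaces it (as the entries that follow show, the replacement pod daemon-set-kxjf4 appears at 06:33:10). A manual equivalent of that wait, as a sketch using standard kubectl commands:

# Block until the DaemonSet converges, or report why it is stuck.
kubectl -n daemonsets-2112 rollout status daemonset/daemon-set

# Compare the desired image with what each pod is actually running.
kubectl -n daemonsets-2112 get daemonset daemon-set \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl -n daemonsets-2112 get pods \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image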
Jan 13 06:33:09.175: INFO: Pod daemon-set-gw5x9 is not available Jan 13 06:33:09.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 06:33:10.174: INFO: Pod daemon-set-kxjf4 is not available Jan 13 06:33:10.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2112, will wait for the garbage collector to delete the pods Jan 13 06:33:10.271: INFO: Deleting DaemonSet.extensions daemon-set took: 7.79538ms Jan 13 06:33:10.872: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.689584ms Jan 13 06:34:20.183: INFO: Number of nodes with available pods: 0 Jan 13 06:34:20.183: INFO: Number of running nodes: 0, number of available pods: 0 Jan 13 06:34:20.191: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"487952"},"items":null} Jan 13 06:34:20.195: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"487952"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:34:20.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2112" for this suite. • [SLOW TEST:148.449 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":309,"completed":36,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:34:20.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 06:34:20.322: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 
13 06:34:29.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9014" for this suite. • [SLOW TEST:9.350 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":309,"completed":37,"skipped":670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:34:29.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod Jan 13 06:34:30.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 13 06:34:31.459: INFO: stderr: "" Jan 13 06:34:31.459: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Waiting for log generator to start. Jan 13 06:34:31.460: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 13 06:34:31.460: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8015" to be "running and ready, or succeeded" Jan 13 06:34:31.466: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.380982ms Jan 13 06:34:33.473: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013107054s Jan 13 06:34:35.481: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.021003978s Jan 13 06:34:35.481: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 13 06:34:35.482: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 13 06:34:35.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator' Jan 13 06:34:36.868: INFO: stderr: "" Jan 13 06:34:36.868: INFO: stdout: "I0113 06:34:34.274121 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/zmp 386\nI0113 06:34:34.474259 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rfv6 543\nI0113 06:34:34.674274 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/rnc2 200\nI0113 06:34:34.874317 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/lsv 441\nI0113 06:34:35.074269 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/p9dz 394\nI0113 06:34:35.274245 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/pgm 249\nI0113 06:34:35.474325 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/fc86 526\nI0113 06:34:35.674277 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/f6s 327\nI0113 06:34:35.874270 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/4pvs 580\nI0113 06:34:36.074224 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/qbgb 249\nI0113 06:34:36.274300 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tkc 401\nI0113 06:34:36.474242 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/9rtb 541\nI0113 06:34:36.674269 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/45j 232\n" STEP: limiting log lines Jan 13 06:34:36.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator --tail=1' Jan 13 06:34:38.329: INFO: stderr: "" Jan 13 06:34:38.329: INFO: stdout: "I0113 06:34:38.274299 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/992 361\n" Jan 13 06:34:38.329: INFO: got output "I0113 06:34:38.274299 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/992 361\n" STEP: limiting log bytes Jan 13 06:34:38.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator --limit-bytes=1' Jan 13 06:34:39.708: INFO: stderr: "" Jan 13 06:34:39.709: INFO: stdout: "I" Jan 13 06:34:39.709: INFO: got output "I" STEP: exposing timestamps Jan 13 06:34:39.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator --tail=1 --timestamps' Jan 13 06:34:41.091: INFO: stderr: "" Jan 13 06:34:41.091: INFO: stdout: "2021-01-13T06:34:41.074381138Z I0113 06:34:41.074231 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/lfg 431\n" Jan 13 06:34:41.091: INFO: got output "2021-01-13T06:34:41.074381138Z I0113 06:34:41.074231 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/lfg 431\n" STEP: restricting to a time range Jan 13 06:34:43.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator --since=1s' Jan 13 06:34:44.962: INFO: stderr: "" Jan 13 06:34:44.962: INFO: stdout: "I0113 06:34:44.074252 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/sf9 455\nI0113 06:34:44.274267 1 logs_generator.go:76] 50 GET /api/v1/namespaces/kube-system/pods/w5f 284\nI0113 06:34:44.474252 
1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/5c7r 303\nI0113 06:34:44.674261 1 logs_generator.go:76] 52 GET /api/v1/namespaces/kube-system/pods/x4h 328\nI0113 06:34:44.874250 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/kube-system/pods/82q 363\n" Jan 13 06:34:44.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 logs logs-generator logs-generator --since=24h' Jan 13 06:34:46.286: INFO: stderr: "" Jan 13 06:34:46.286: INFO: stdout: "I0113 06:34:34.274121 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/zmp 386\nI0113 06:34:34.474259 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/rfv6 543\nI0113 06:34:34.674274 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/rnc2 200\nI0113 06:34:34.874317 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/lsv 441\nI0113 06:34:35.074269 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/p9dz 394\nI0113 06:34:35.274245 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/pgm 249\nI0113 06:34:35.474325 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/fc86 526\nI0113 06:34:35.674277 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/f6s 327\nI0113 06:34:35.874270 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/4pvs 580\nI0113 06:34:36.074224 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/qbgb 249\nI0113 06:34:36.274300 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/tkc 401\nI0113 06:34:36.474242 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/9rtb 541\nI0113 06:34:36.674269 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/45j 232\nI0113 06:34:36.874233 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/nw2z 220\nI0113 06:34:37.074322 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/zf7m 366\nI0113 06:34:37.274262 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/w2w 524\nI0113 06:34:37.474172 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/pbj 310\nI0113 06:34:37.674239 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/6b7 515\nI0113 06:34:37.874237 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/kp8 550\nI0113 06:34:38.074261 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/8mb5 297\nI0113 06:34:38.274299 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/992 361\nI0113 06:34:38.474321 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/xjxv 223\nI0113 06:34:38.674293 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/rmgp 320\nI0113 06:34:38.874236 1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/hks 543\nI0113 06:34:39.074318 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/5bjl 213\nI0113 06:34:39.274231 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/5dd5 287\nI0113 06:34:39.474238 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/kube-system/pods/czk 450\nI0113 06:34:39.674329 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/ns/pods/kmgl 474\nI0113 06:34:39.874248 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/kube-system/pods/qfn 506\nI0113 06:34:40.074286 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/blhs 473\nI0113 06:34:40.274272 1 logs_generator.go:76] 30 GET /api/v1/namespaces/ns/pods/lqs 294\nI0113 06:34:40.474311 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/ng2 509\nI0113 
06:34:40.674235 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/jvx 403\nI0113 06:34:40.874271 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/ns/pods/l9nc 400\nI0113 06:34:41.074231 1 logs_generator.go:76] 34 PUT /api/v1/namespaces/kube-system/pods/lfg 431\nI0113 06:34:41.274310 1 logs_generator.go:76] 35 POST /api/v1/namespaces/ns/pods/bncc 409\nI0113 06:34:41.474271 1 logs_generator.go:76] 36 POST /api/v1/namespaces/kube-system/pods/lt6d 411\nI0113 06:34:41.674300 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/26sq 462\nI0113 06:34:41.874266 1 logs_generator.go:76] 38 POST /api/v1/namespaces/ns/pods/n67j 246\nI0113 06:34:42.074275 1 logs_generator.go:76] 39 POST /api/v1/namespaces/kube-system/pods/rp9 407\nI0113 06:34:42.274265 1 logs_generator.go:76] 40 PUT /api/v1/namespaces/kube-system/pods/4r7 268\nI0113 06:34:42.474297 1 logs_generator.go:76] 41 POST /api/v1/namespaces/ns/pods/qzq 541\nI0113 06:34:42.674271 1 logs_generator.go:76] 42 POST /api/v1/namespaces/default/pods/csn7 550\nI0113 06:34:42.874273 1 logs_generator.go:76] 43 POST /api/v1/namespaces/ns/pods/lmmw 471\nI0113 06:34:43.074335 1 logs_generator.go:76] 44 PUT /api/v1/namespaces/kube-system/pods/ls2 524\nI0113 06:34:43.274276 1 logs_generator.go:76] 45 POST /api/v1/namespaces/kube-system/pods/ztwq 519\nI0113 06:34:43.474323 1 logs_generator.go:76] 46 PUT /api/v1/namespaces/kube-system/pods/4gcq 514\nI0113 06:34:43.674271 1 logs_generator.go:76] 47 POST /api/v1/namespaces/kube-system/pods/78n 586\nI0113 06:34:43.874246 1 logs_generator.go:76] 48 POST /api/v1/namespaces/default/pods/rg7 299\nI0113 06:34:44.074252 1 logs_generator.go:76] 49 GET /api/v1/namespaces/kube-system/pods/sf9 455\nI0113 06:34:44.274267 1 logs_generator.go:76] 50 GET /api/v1/namespaces/kube-system/pods/w5f 284\nI0113 06:34:44.474252 1 logs_generator.go:76] 51 GET /api/v1/namespaces/default/pods/5c7r 303\nI0113 06:34:44.674261 1 logs_generator.go:76] 52 GET /api/v1/namespaces/kube-system/pods/x4h 328\nI0113 06:34:44.874250 1 logs_generator.go:76] 53 PUT /api/v1/namespaces/kube-system/pods/82q 363\nI0113 06:34:45.074269 1 logs_generator.go:76] 54 POST /api/v1/namespaces/default/pods/4nv6 224\nI0113 06:34:45.274242 1 logs_generator.go:76] 55 GET /api/v1/namespaces/default/pods/fjl 412\nI0113 06:34:45.474293 1 logs_generator.go:76] 56 GET /api/v1/namespaces/default/pods/wnw 434\nI0113 06:34:45.674263 1 logs_generator.go:76] 57 GET /api/v1/namespaces/ns/pods/5f57 250\nI0113 06:34:45.874259 1 logs_generator.go:76] 58 POST /api/v1/namespaces/ns/pods/pvq9 589\nI0113 06:34:46.074240 1 logs_generator.go:76] 59 PUT /api/v1/namespaces/kube-system/pods/fsq4 370\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 13 06:34:46.289: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8015 delete pod logs-generator' Jan 13 06:35:20.205: INFO: stderr: "" Jan 13 06:35:20.205: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:35:20.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8015" for this suite. 
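Each filtering step in the kubectl-logs test corresponds to a standard kubectl flag. Stripped of the --server/--kubeconfig plumbing, the invocations the test ran look like this (pod and container are both named logs-generator):

kubectl -n kubectl-8015 logs logs-generator logs-generator                          # full output
kubectl -n kubectl-8015 logs logs-generator logs-generator --tail=1                 # last line only
kubectl -n kubectl-8015 logs logs-generator logs-generator --limit-bytes=1          # first byte only
kubectl -n kubectl-8015 logs logs-generator logs-generator --tail=1 --timestamps    # prefix RFC3339 timestamps
kubectl -n kubectl-8015 logs logs-generator logs-generator --since=1s               # only the last second
kubectl -n kubectl-8015 logs logs-generator logs-generator --since=24h              # everything from the last day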
• [SLOW TEST:50.633 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":309,"completed":38,"skipped":712,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:35:20.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:35:22.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:35:24.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116522, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116522, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116522, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116522, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:35:27.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by 
the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:35:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8942" for this suite. STEP: Destroying namespace "webhook-8942-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.216 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":309,"completed":39,"skipped":720,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:35:38.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's command Jan 13 06:35:38.634: INFO: Waiting up to 5m0s for pod "var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02" in namespace "var-expansion-8492" to be "Succeeded or Failed" Jan 13 06:35:38.698: INFO: Pod "var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02": Phase="Pending", Reason="", readiness=false. Elapsed: 63.5426ms Jan 13 06:35:40.707: INFO: Pod "var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072945924s Jan 13 06:35:42.715: INFO: Pod "var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080437036s STEP: Saw pod success Jan 13 06:35:42.715: INFO: Pod "var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02" satisfied condition "Succeeded or Failed" Jan 13 06:35:42.720: INFO: Trying to get logs from node leguer-worker pod var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02 container dapi-container: STEP: delete the pod Jan 13 06:35:42.786: INFO: Waiting for pod var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02 to disappear Jan 13 06:35:42.811: INFO: Pod var-expansion-b71c809e-faa9-487f-bb91-ae78821f2d02 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:35:42.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8492" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":309,"completed":40,"skipped":744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:35:42.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 06:35:42.986: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9" in namespace "projected-1175" to be "Succeeded or Failed" Jan 13 06:35:43.007: INFO: Pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.387083ms Jan 13 06:35:45.016: INFO: Pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030328803s Jan 13 06:35:47.024: INFO: Pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9": Phase="Running", Reason="", readiness=true. Elapsed: 4.037813355s Jan 13 06:35:49.033: INFO: Pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.04695228s STEP: Saw pod success Jan 13 06:35:49.033: INFO: Pod "downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9" satisfied condition "Succeeded or Failed" Jan 13 06:35:49.038: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9 container client-container: STEP: delete the pod Jan 13 06:35:49.150: INFO: Waiting for pod downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9 to disappear Jan 13 06:35:49.156: INFO: Pod downwardapi-volume-2c8951ac-bced-4bad-a133-cbc5d10708b9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:35:49.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1175" for this suite. • [SLOW TEST:6.340 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":41,"skipped":798,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:35:49.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 13 06:35:57.385: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:35:57.404: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:35:59.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:35:59.414: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:01.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:01.418: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:03.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:03.425: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:05.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:05.413: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:07.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:07.412: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:09.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:09.414: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 06:36:11.405: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 06:36:11.454: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:36:11.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9470" for this suite. 
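The lifecycle-hook test above first creates a helper pod, then a pod whose postStart exec hook is verified against it, and finally waits for the hooked pod to disappear after deletion. A reduced sketch of a pod with a postStart exec hook, using an illustrative busybox image and a local file instead of the callback the framework checks:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hooked
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs right after the container starts; the container is not
          # reported Running until this command returns.
          command: ["sh", "-c", "echo poststart-ran > /tmp/hook-ran"]
EOF

# Once the pod is Running, verify the hook executed.
kubectl exec pod-with-poststart-exec-hook -- cat /tmp/hook-ran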
• [SLOW TEST:22.300 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":309,"completed":42,"skipped":802,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:36:11.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-7286 STEP: creating service affinity-clusterip-transition in namespace services-7286 STEP: creating replication controller affinity-clusterip-transition in namespace services-7286 I0113 06:36:11.628142 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7286, replica count: 3 I0113 06:36:14.681595 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:36:17.683906 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:36:20.684878 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:36:20.700: INFO: Creating new exec pod Jan 13 06:36:25.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7286 exec execpod-affinityckr8z -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 13 06:36:27.376: INFO: stderr: "I0113 06:36:27.257722 418 log.go:181] (0x4000c2e000) (0x40001bd0e0) Create stream\nI0113 06:36:27.260791 418 log.go:181] (0x4000c2e000) (0x40001bd0e0) Stream added, broadcasting: 1\nI0113 06:36:27.272064 418 log.go:181] (0x4000c2e000) Reply frame received for 1\nI0113 06:36:27.273109 418 log.go:181] (0x4000c2e000) (0x40004b8aa0) Create stream\nI0113 06:36:27.273250 418 log.go:181] (0x4000c2e000) (0x40004b8aa0) Stream added, broadcasting: 3\nI0113 06:36:27.274942 418 log.go:181] (0x4000c2e000) 
Reply frame received for 3\nI0113 06:36:27.275320 418 log.go:181] (0x4000c2e000) (0x40001bd860) Create stream\nI0113 06:36:27.275399 418 log.go:181] (0x4000c2e000) (0x40001bd860) Stream added, broadcasting: 5\nI0113 06:36:27.276653 418 log.go:181] (0x4000c2e000) Reply frame received for 5\nI0113 06:36:27.358599 418 log.go:181] (0x4000c2e000) Data frame received for 5\nI0113 06:36:27.358947 418 log.go:181] (0x4000c2e000) Data frame received for 3\nI0113 06:36:27.359062 418 log.go:181] (0x40004b8aa0) (3) Data frame handling\nI0113 06:36:27.359149 418 log.go:181] (0x40001bd860) (5) Data frame handling\nI0113 06:36:27.360110 418 log.go:181] (0x40001bd860) (5) Data frame sent\nI0113 06:36:27.360409 418 log.go:181] (0x4000c2e000) Data frame received for 1\nI0113 06:36:27.360643 418 log.go:181] (0x40001bd0e0) (1) Data frame handling\nI0113 06:36:27.360770 418 log.go:181] (0x40001bd0e0) (1) Data frame sent\nI0113 06:36:27.360949 418 log.go:181] (0x4000c2e000) Data frame received for 5\nI0113 06:36:27.361038 418 log.go:181] (0x40001bd860) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0113 06:36:27.362090 418 log.go:181] (0x4000c2e000) (0x40001bd0e0) Stream removed, broadcasting: 1\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0113 06:36:27.362911 418 log.go:181] (0x40001bd860) (5) Data frame sent\nI0113 06:36:27.363010 418 log.go:181] (0x4000c2e000) Data frame received for 5\nI0113 06:36:27.363104 418 log.go:181] (0x40001bd860) (5) Data frame handling\nI0113 06:36:27.364423 418 log.go:181] (0x4000c2e000) Go away received\nI0113 06:36:27.367045 418 log.go:181] (0x4000c2e000) (0x40001bd0e0) Stream removed, broadcasting: 1\nI0113 06:36:27.367426 418 log.go:181] (0x4000c2e000) (0x40004b8aa0) Stream removed, broadcasting: 3\nI0113 06:36:27.367693 418 log.go:181] (0x4000c2e000) (0x40001bd860) Stream removed, broadcasting: 5\n" Jan 13 06:36:27.377: INFO: stdout: "" Jan 13 06:36:27.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7286 exec execpod-affinityckr8z -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.147 80' Jan 13 06:36:28.991: INFO: stderr: "I0113 06:36:28.881066 439 log.go:181] (0x40001a8370) (0x4000378000) Create stream\nI0113 06:36:28.883587 439 log.go:181] (0x40001a8370) (0x4000378000) Stream added, broadcasting: 1\nI0113 06:36:28.897604 439 log.go:181] (0x40001a8370) Reply frame received for 1\nI0113 06:36:28.898172 439 log.go:181] (0x40001a8370) (0x4000b20640) Create stream\nI0113 06:36:28.898233 439 log.go:181] (0x40001a8370) (0x4000b20640) Stream added, broadcasting: 3\nI0113 06:36:28.899722 439 log.go:181] (0x40001a8370) Reply frame received for 3\nI0113 06:36:28.899992 439 log.go:181] (0x40001a8370) (0x40003780a0) Create stream\nI0113 06:36:28.900055 439 log.go:181] (0x40001a8370) (0x40003780a0) Stream added, broadcasting: 5\nI0113 06:36:28.901417 439 log.go:181] (0x40001a8370) Reply frame received for 5\nI0113 06:36:28.970302 439 log.go:181] (0x40001a8370) Data frame received for 3\nI0113 06:36:28.970694 439 log.go:181] (0x40001a8370) Data frame received for 1\nI0113 06:36:28.970904 439 log.go:181] (0x4000378000) (1) Data frame handling\nI0113 06:36:28.971084 439 log.go:181] (0x4000b20640) (3) Data frame handling\nI0113 06:36:28.971370 439 log.go:181] (0x40001a8370) Data frame received for 5\nI0113 06:36:28.971501 439 log.go:181] (0x40003780a0) (5) Data frame handling\nI0113 06:36:28.973134 439 log.go:181] (0x4000378000) (1) Data frame sent\nI0113 
06:36:28.974665 439 log.go:181] (0x40003780a0) (5) Data frame sent\nI0113 06:36:28.974839 439 log.go:181] (0x40001a8370) Data frame received for 5\nI0113 06:36:28.974958 439 log.go:181] (0x40003780a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.147 80\nConnection to 10.96.1.147 80 port [tcp/http] succeeded!\nI0113 06:36:28.975423 439 log.go:181] (0x40001a8370) (0x4000378000) Stream removed, broadcasting: 1\nI0113 06:36:28.979192 439 log.go:181] (0x40001a8370) Go away received\nI0113 06:36:28.982874 439 log.go:181] (0x40001a8370) (0x4000378000) Stream removed, broadcasting: 1\nI0113 06:36:28.983341 439 log.go:181] (0x40001a8370) (0x4000b20640) Stream removed, broadcasting: 3\nI0113 06:36:28.983619 439 log.go:181] (0x40001a8370) (0x40003780a0) Stream removed, broadcasting: 5\n" Jan 13 06:36:28.992: INFO: stdout: "" Jan 13 06:36:29.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7286 exec execpod-affinityckr8z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.1.147:80/ ; done' Jan 13 06:36:30.679: INFO: stderr: "I0113 06:36:30.475849 459 log.go:181] (0x400003a420) (0x4000c761e0) Create stream\nI0113 06:36:30.480269 459 log.go:181] (0x400003a420) (0x4000c761e0) Stream added, broadcasting: 1\nI0113 06:36:30.491593 459 log.go:181] (0x400003a420) Reply frame received for 1\nI0113 06:36:30.492396 459 log.go:181] (0x400003a420) (0x4000956280) Create stream\nI0113 06:36:30.492477 459 log.go:181] (0x400003a420) (0x4000956280) Stream added, broadcasting: 3\nI0113 06:36:30.494227 459 log.go:181] (0x400003a420) Reply frame received for 3\nI0113 06:36:30.494564 459 log.go:181] (0x400003a420) (0x4000956320) Create stream\nI0113 06:36:30.494645 459 log.go:181] (0x400003a420) (0x4000956320) Stream added, broadcasting: 5\nI0113 06:36:30.496068 459 log.go:181] (0x400003a420) Reply frame received for 5\nI0113 06:36:30.571241 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.571685 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.571849 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.571948 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.573379 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.573561 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.574768 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.574863 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.574963 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.575751 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.575825 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.575884 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0113 06:36:30.575947 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.575998 459 log.go:181] (0x4000956320) (5) Data frame handling\n http://10.96.1.147:80/\nI0113 06:36:30.576082 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.576202 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.576321 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.576455 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.581236 459 log.go:181] (0x400003a420) Data frame received 
for 3\nI0113 06:36:30.581315 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.581383 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.582053 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.582163 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.582269 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.582398 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.582501 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.582561 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.587479 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.587619 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.587720 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.588062 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.588154 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.588229 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.588300 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.588366 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.588461 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.592595 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.592724 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.592966 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.593080 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.593171 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.593311 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.593475 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.593666 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.593884 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.598720 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.598846 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.599005 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.599222 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.599353 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.599511 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.599718 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.599881 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.600011 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.605370 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.605488 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.605601 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.606810 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.606996 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.607127 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.607241 459 log.go:181] (0x400003a420) 
Data frame received for 3\nI0113 06:36:30.607348 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.607465 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.614644 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.614742 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.614864 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.615296 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.615396 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.615493 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.615590 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.615666 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.615741 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.620113 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.620248 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.620380 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.621398 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.621496 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.621581 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.621665 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.621731 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.621797 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.624819 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.624971 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.625060 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.625496 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.625582 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.625676 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.625770 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.625845 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.625922 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.629626 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.629727 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.629827 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.630263 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.630343 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.630426 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.630496 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.630554 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.630621 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.634268 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.634354 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.634457 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.634935 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 
06:36:30.635057 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.635212 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.635272 459 log.go:181] (0x4000956280) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.635347 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.635416 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.639934 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.640027 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.640124 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.640432 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.640515 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.640600 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.640676 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.640739 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.640811 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.644525 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.644585 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.644685 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.645109 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.645200 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.645292 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.645399 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.645475 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.645554 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.649185 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.649286 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.649375 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.649695 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.649776 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.649846 459 log.go:181] (0x4000956320) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.649959 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.650025 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.650091 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.654030 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.654126 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.654234 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.655079 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.655229 459 log.go:181] (0x4000956320) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:30.655396 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.655543 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.655664 459 log.go:181] (0x4000956320) (5) Data frame sent\nI0113 06:36:30.655825 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.661278 459 log.go:181] (0x400003a420) Data frame 
received for 3\nI0113 06:36:30.661345 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.661419 459 log.go:181] (0x4000956280) (3) Data frame sent\nI0113 06:36:30.662409 459 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:36:30.662505 459 log.go:181] (0x4000956320) (5) Data frame handling\nI0113 06:36:30.662608 459 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:36:30.662713 459 log.go:181] (0x4000956280) (3) Data frame handling\nI0113 06:36:30.664279 459 log.go:181] (0x400003a420) Data frame received for 1\nI0113 06:36:30.664372 459 log.go:181] (0x4000c761e0) (1) Data frame handling\nI0113 06:36:30.664469 459 log.go:181] (0x4000c761e0) (1) Data frame sent\nI0113 06:36:30.665487 459 log.go:181] (0x400003a420) (0x4000c761e0) Stream removed, broadcasting: 1\nI0113 06:36:30.667996 459 log.go:181] (0x400003a420) Go away received\nI0113 06:36:30.671745 459 log.go:181] (0x400003a420) (0x4000c761e0) Stream removed, broadcasting: 1\nI0113 06:36:30.672019 459 log.go:181] (0x400003a420) (0x4000956280) Stream removed, broadcasting: 3\nI0113 06:36:30.672192 459 log.go:181] (0x400003a420) (0x4000956320) Stream removed, broadcasting: 5\n" Jan 13 06:36:30.686: INFO: stdout: "\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-8m2nk\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-8m2nk\naffinity-clusterip-transition-bvdrg\naffinity-clusterip-transition-8m2nk\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-bvdrg" Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-8m2nk Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-8m2nk Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-8m2nk Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:30.687: INFO: Received response from host: affinity-clusterip-transition-bvdrg Jan 13 06:36:30.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7286 exec execpod-affinityckr8z -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://10.96.1.147:80/ ; done' Jan 13 06:36:32.417: INFO: stderr: "I0113 06:36:32.227084 479 log.go:181] (0x40002e6000) (0x400012d180) Create stream\nI0113 06:36:32.232199 479 log.go:181] (0x40002e6000) (0x400012d180) Stream added, broadcasting: 1\nI0113 06:36:32.243690 479 log.go:181] (0x40002e6000) Reply frame received for 1\nI0113 06:36:32.244939 479 log.go:181] (0x40002e6000) (0x400071e280) Create stream\nI0113 06:36:32.245050 479 log.go:181] (0x40002e6000) (0x400071e280) Stream added, broadcasting: 3\nI0113 06:36:32.246723 479 log.go:181] (0x40002e6000) Reply frame received for 3\nI0113 06:36:32.247026 479 log.go:181] (0x40002e6000) (0x4000552000) Create stream\nI0113 06:36:32.247108 479 log.go:181] (0x40002e6000) (0x4000552000) Stream added, broadcasting: 5\nI0113 06:36:32.248589 479 log.go:181] (0x40002e6000) Reply frame received for 5\nI0113 06:36:32.310950 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.311326 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.311473 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.311585 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.312539 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.312761 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.314000 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.314093 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.314192 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.314276 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.314352 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.314482 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.314579 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.314705 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.314785 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.318581 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.318682 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.318812 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.319017 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.319108 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.319222 479 log.go:181] (0x4000552000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.319323 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.319417 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.319495 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.323087 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.323175 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.323314 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.323920 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.324076 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.324238 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.324336 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.324444 
479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.324538 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.327011 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.327146 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.327315 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.327517 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.327587 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.327677 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.327811 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.327938 479 log.go:181] (0x4000552000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.328040 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.333180 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.333301 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.333413 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.333755 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.333868 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.333964 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.334108 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.334240 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.334372 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.339162 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.339266 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.339361 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.339505 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.339630 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.339732 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.339850 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.339954 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.340075 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.343678 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.343776 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.343947 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.344130 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.344296 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.344422 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.344532 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.344621 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.344731 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.350074 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.350193 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.350378 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.350589 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.350732 479 log.go:181] (0x400071e280) (3) Data frame 
handling\nI0113 06:36:32.350886 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.351094 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.351258 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.351410 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.357390 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.357530 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.357660 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.358127 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.358204 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/I0113 06:36:32.358273 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.358377 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.358444 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.358510 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.358577 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.358652 479 log.go:181] (0x4000552000) (5) Data frame handling\n\nI0113 06:36:32.358776 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.363882 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.364004 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.364186 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.364598 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.364730 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -sI0113 06:36:32.364818 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.364994 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.365068 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.365143 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.365207 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.365264 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.365338 479 log.go:181] (0x4000552000) (5) Data frame sent\n --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.370996 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.371087 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.371167 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.371741 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.371872 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.371997 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.372094 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.372188 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.372301 479 log.go:181] (0x4000552000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.375858 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.375959 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.376099 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.376508 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.376627 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.376741 479 
log.go:181] (0x4000552000) (5) Data frame sent\n+ echo\n+ curl -q -sI0113 06:36:32.376920 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.377254 479 log.go:181] (0x4000552000) (5) Data frame handling\n --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.377334 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.377416 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.377480 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.377547 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.380502 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.380676 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.381040 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.382018 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.382132 479 log.go:181] (0x4000552000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.382212 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.382340 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.382422 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.382522 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.385461 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.385550 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.385656 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.386376 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.386516 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.386637 479 log.go:181] (0x4000552000) (5) Data frame sent\nI0113 06:36:32.386749 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.386869 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.386982 479 log.go:181] (0x400071e280) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.390683 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.390776 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.390880 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.391379 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.391483 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.391566 479 log.go:181] (0x4000552000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.1.147:80/\nI0113 06:36:32.391656 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.391746 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.391840 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.397300 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.397396 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.397494 479 log.go:181] (0x400071e280) (3) Data frame sent\nI0113 06:36:32.397728 479 log.go:181] (0x40002e6000) Data frame received for 3\nI0113 06:36:32.397819 479 log.go:181] (0x400071e280) (3) Data frame handling\nI0113 06:36:32.398471 479 log.go:181] (0x40002e6000) Data frame received for 5\nI0113 06:36:32.398557 479 log.go:181] (0x4000552000) (5) Data frame handling\nI0113 06:36:32.399549 479 log.go:181] (0x40002e6000) Data frame received for 1\nI0113 06:36:32.399638 479 log.go:181] (0x400012d180) (1) Data 
frame handling\nI0113 06:36:32.399729 479 log.go:181] (0x400012d180) (1) Data frame sent\nI0113 06:36:32.400591 479 log.go:181] (0x40002e6000) (0x400012d180) Stream removed, broadcasting: 1\nI0113 06:36:32.404209 479 log.go:181] (0x40002e6000) Go away received\nI0113 06:36:32.407196 479 log.go:181] (0x40002e6000) (0x400012d180) Stream removed, broadcasting: 1\nI0113 06:36:32.407615 479 log.go:181] (0x40002e6000) (0x400071e280) Stream removed, broadcasting: 3\nI0113 06:36:32.407888 479 log.go:181] (0x40002e6000) (0x4000552000) Stream removed, broadcasting: 5\n" Jan 13 06:36:32.423: INFO: stdout: "\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6\naffinity-clusterip-transition-tkbv6" Jan 13 06:36:32.423: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.424: INFO: Received response from host: affinity-clusterip-transition-tkbv6 Jan 13 06:36:32.425: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7286, will wait for the garbage collector to delete the pods Jan 13 06:36:32.521: INFO: Deleting ReplicationController affinity-clusterip-transition took: 16.103998ms Jan 13 06:36:32.622: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.783438ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:20.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7286" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:68.841 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":43,"skipped":805,"failed":0} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:20.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:20.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5631" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":309,"completed":44,"skipped":806,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:20.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 13 06:37:20.554: INFO: Waiting up to 5m0s for pod "pod-26cb8325-c02f-4252-8e5e-3bfeb632d964" in namespace "emptydir-7293" to be "Succeeded or Failed" Jan 13 06:37:20.560: INFO: Pod "pod-26cb8325-c02f-4252-8e5e-3bfeb632d964": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.553425ms Jan 13 06:37:22.567: INFO: Pod "pod-26cb8325-c02f-4252-8e5e-3bfeb632d964": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012388778s Jan 13 06:37:24.575: INFO: Pod "pod-26cb8325-c02f-4252-8e5e-3bfeb632d964": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020703617s STEP: Saw pod success Jan 13 06:37:24.575: INFO: Pod "pod-26cb8325-c02f-4252-8e5e-3bfeb632d964" satisfied condition "Succeeded or Failed" Jan 13 06:37:24.580: INFO: Trying to get logs from node leguer-worker2 pod pod-26cb8325-c02f-4252-8e5e-3bfeb632d964 container test-container: STEP: delete the pod Jan 13 06:37:24.714: INFO: Waiting for pod pod-26cb8325-c02f-4252-8e5e-3bfeb632d964 to disappear Jan 13 06:37:24.733: INFO: Pod pod-26cb8325-c02f-4252-8e5e-3bfeb632d964 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:24.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7293" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":45,"skipped":806,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:24.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-98ef7bcd-08c3-4ce8-a4f9-1c68f246be72 STEP: Creating a pod to test consume secrets Jan 13 06:37:24.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c" in namespace "projected-6551" to be "Succeeded or Failed" Jan 13 06:37:24.877: INFO: Pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.009482ms Jan 13 06:37:27.077: INFO: Pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214770246s Jan 13 06:37:29.088: INFO: Pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c": Phase="Running", Reason="", readiness=true. Elapsed: 4.22556043s Jan 13 06:37:31.097: INFO: Pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.234391758s STEP: Saw pod success Jan 13 06:37:31.097: INFO: Pod "pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c" satisfied condition "Succeeded or Failed" Jan 13 06:37:31.103: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c container projected-secret-volume-test: STEP: delete the pod Jan 13 06:37:31.168: INFO: Waiting for pod pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c to disappear Jan 13 06:37:31.172: INFO: Pod pod-projected-secrets-bf9dc25e-8f9f-4564-9744-62a1db9aed9c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:31.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6551" for this suite. • [SLOW TEST:6.442 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":46,"skipped":815,"failed":0} SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:31.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 13 06:37:45.377: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:45.377: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:45.437069 10 log.go:181] (0x40008188f0) (0x4002e05180) Create stream I0113 06:37:45.437258 10 log.go:181] (0x40008188f0) (0x4002e05180) Stream added, broadcasting: 1 I0113 06:37:45.441271 10 log.go:181] (0x40008188f0) Reply frame received for 1 I0113 06:37:45.441474 10 log.go:181] (0x40008188f0) (0x40027d5360) Create stream I0113 06:37:45.441578 10 log.go:181] (0x40008188f0) (0x40027d5360) Stream added, broadcasting: 3 I0113 06:37:45.443323 10 log.go:181] (0x40008188f0) Reply frame received for 3 I0113 06:37:45.443495 10 log.go:181] (0x40008188f0) (0x400114d180) Create stream I0113 06:37:45.443590 10 log.go:181] (0x40008188f0) (0x400114d180) Stream added, broadcasting: 5 I0113 06:37:45.445147 10 
log.go:181] (0x40008188f0) Reply frame received for 5 I0113 06:37:45.533416 10 log.go:181] (0x40008188f0) Data frame received for 3 I0113 06:37:45.533594 10 log.go:181] (0x40027d5360) (3) Data frame handling I0113 06:37:45.533684 10 log.go:181] (0x40027d5360) (3) Data frame sent I0113 06:37:45.533766 10 log.go:181] (0x40008188f0) Data frame received for 3 I0113 06:37:45.533822 10 log.go:181] (0x40027d5360) (3) Data frame handling I0113 06:37:45.533961 10 log.go:181] (0x40008188f0) Data frame received for 5 I0113 06:37:45.534112 10 log.go:181] (0x400114d180) (5) Data frame handling I0113 06:37:45.535078 10 log.go:181] (0x40008188f0) Data frame received for 1 I0113 06:37:45.535191 10 log.go:181] (0x4002e05180) (1) Data frame handling I0113 06:37:45.535299 10 log.go:181] (0x4002e05180) (1) Data frame sent I0113 06:37:45.535413 10 log.go:181] (0x40008188f0) (0x4002e05180) Stream removed, broadcasting: 1 I0113 06:37:45.535548 10 log.go:181] (0x40008188f0) Go away received I0113 06:37:45.535890 10 log.go:181] (0x40008188f0) (0x4002e05180) Stream removed, broadcasting: 1 I0113 06:37:45.536078 10 log.go:181] (0x40008188f0) (0x40027d5360) Stream removed, broadcasting: 3 I0113 06:37:45.536243 10 log.go:181] (0x40008188f0) (0x400114d180) Stream removed, broadcasting: 5 Jan 13 06:37:45.536: INFO: Exec stderr: "" Jan 13 06:37:45.537: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:45.537: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:45.594801 10 log.go:181] (0x40016f0370) (0x40009d2aa0) Create stream I0113 06:37:45.594971 10 log.go:181] (0x40016f0370) (0x40009d2aa0) Stream added, broadcasting: 1 I0113 06:37:45.599060 10 log.go:181] (0x40016f0370) Reply frame received for 1 I0113 06:37:45.599397 10 log.go:181] (0x40016f0370) (0x40009d2d20) Create stream I0113 06:37:45.599518 10 log.go:181] (0x40016f0370) (0x40009d2d20) Stream added, broadcasting: 3 I0113 06:37:45.601401 10 log.go:181] (0x40016f0370) Reply frame received for 3 I0113 06:37:45.601614 10 log.go:181] (0x40016f0370) (0x40008d03c0) Create stream I0113 06:37:45.601732 10 log.go:181] (0x40016f0370) (0x40008d03c0) Stream added, broadcasting: 5 I0113 06:37:45.603359 10 log.go:181] (0x40016f0370) Reply frame received for 5 I0113 06:37:45.678520 10 log.go:181] (0x40016f0370) Data frame received for 5 I0113 06:37:45.678759 10 log.go:181] (0x40008d03c0) (5) Data frame handling I0113 06:37:45.678977 10 log.go:181] (0x40016f0370) Data frame received for 3 I0113 06:37:45.679120 10 log.go:181] (0x40009d2d20) (3) Data frame handling I0113 06:37:45.679270 10 log.go:181] (0x40009d2d20) (3) Data frame sent I0113 06:37:45.679427 10 log.go:181] (0x40016f0370) Data frame received for 3 I0113 06:37:45.679647 10 log.go:181] (0x40009d2d20) (3) Data frame handling I0113 06:37:45.679884 10 log.go:181] (0x40016f0370) Data frame received for 1 I0113 06:37:45.680032 10 log.go:181] (0x40009d2aa0) (1) Data frame handling I0113 06:37:45.680180 10 log.go:181] (0x40009d2aa0) (1) Data frame sent I0113 06:37:45.680292 10 log.go:181] (0x40016f0370) (0x40009d2aa0) Stream removed, broadcasting: 1 I0113 06:37:45.680418 10 log.go:181] (0x40016f0370) Go away received I0113 06:37:45.680677 10 log.go:181] (0x40016f0370) (0x40009d2aa0) Stream removed, broadcasting: 1 I0113 06:37:45.680959 10 log.go:181] (0x40016f0370) (0x40009d2d20) Stream removed, broadcasting: 3 I0113 06:37:45.681113 10 
log.go:181] (0x40016f0370) (0x40008d03c0) Stream removed, broadcasting: 5 Jan 13 06:37:45.681: INFO: Exec stderr: "" Jan 13 06:37:45.681: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:45.681: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:45.741999 10 log.go:181] (0x40016f06e0) (0x40010c55e0) Create stream I0113 06:37:45.742183 10 log.go:181] (0x40016f06e0) (0x40010c55e0) Stream added, broadcasting: 1 I0113 06:37:45.746060 10 log.go:181] (0x40016f06e0) Reply frame received for 1 I0113 06:37:45.746231 10 log.go:181] (0x40016f06e0) (0x4002e052c0) Create stream I0113 06:37:45.746299 10 log.go:181] (0x40016f06e0) (0x4002e052c0) Stream added, broadcasting: 3 I0113 06:37:45.747586 10 log.go:181] (0x40016f06e0) Reply frame received for 3 I0113 06:37:45.747716 10 log.go:181] (0x40016f06e0) (0x40010c5680) Create stream I0113 06:37:45.747805 10 log.go:181] (0x40016f06e0) (0x40010c5680) Stream added, broadcasting: 5 I0113 06:37:45.749246 10 log.go:181] (0x40016f06e0) Reply frame received for 5 I0113 06:37:45.818182 10 log.go:181] (0x40016f06e0) Data frame received for 3 I0113 06:37:45.818363 10 log.go:181] (0x4002e052c0) (3) Data frame handling I0113 06:37:45.818458 10 log.go:181] (0x40016f06e0) Data frame received for 5 I0113 06:37:45.818578 10 log.go:181] (0x40010c5680) (5) Data frame handling I0113 06:37:45.818716 10 log.go:181] (0x4002e052c0) (3) Data frame sent I0113 06:37:45.818881 10 log.go:181] (0x40016f06e0) Data frame received for 3 I0113 06:37:45.818946 10 log.go:181] (0x4002e052c0) (3) Data frame handling I0113 06:37:45.819200 10 log.go:181] (0x40016f06e0) Data frame received for 1 I0113 06:37:45.819345 10 log.go:181] (0x40010c55e0) (1) Data frame handling I0113 06:37:45.819509 10 log.go:181] (0x40010c55e0) (1) Data frame sent I0113 06:37:45.819651 10 log.go:181] (0x40016f06e0) (0x40010c55e0) Stream removed, broadcasting: 1 I0113 06:37:45.819805 10 log.go:181] (0x40016f06e0) Go away received I0113 06:37:45.820022 10 log.go:181] (0x40016f06e0) (0x40010c55e0) Stream removed, broadcasting: 1 I0113 06:37:45.820114 10 log.go:181] (0x40016f06e0) (0x4002e052c0) Stream removed, broadcasting: 3 I0113 06:37:45.820184 10 log.go:181] (0x40016f06e0) (0x40010c5680) Stream removed, broadcasting: 5 Jan 13 06:37:45.820: INFO: Exec stderr: "" Jan 13 06:37:45.820: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:45.820: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:45.871546 10 log.go:181] (0x40008af760) (0x40008d08c0) Create stream I0113 06:37:45.871666 10 log.go:181] (0x40008af760) (0x40008d08c0) Stream added, broadcasting: 1 I0113 06:37:45.875595 10 log.go:181] (0x40008af760) Reply frame received for 1 I0113 06:37:45.875849 10 log.go:181] (0x40008af760) (0x400114d220) Create stream I0113 06:37:45.875958 10 log.go:181] (0x40008af760) (0x400114d220) Stream added, broadcasting: 3 I0113 06:37:45.877842 10 log.go:181] (0x40008af760) Reply frame received for 3 I0113 06:37:45.878106 10 log.go:181] (0x40008af760) (0x4002e05360) Create stream I0113 06:37:45.878239 10 log.go:181] (0x40008af760) (0x4002e05360) Stream added, broadcasting: 5 I0113 06:37:45.880001 10 log.go:181] (0x40008af760) Reply frame received for 5 I0113 06:37:45.959204 10 
log.go:181] (0x40008af760) Data frame received for 3 I0113 06:37:45.959443 10 log.go:181] (0x400114d220) (3) Data frame handling I0113 06:37:45.959615 10 log.go:181] (0x40008af760) Data frame received for 5 I0113 06:37:45.959824 10 log.go:181] (0x4002e05360) (5) Data frame handling I0113 06:37:45.960006 10 log.go:181] (0x400114d220) (3) Data frame sent I0113 06:37:45.960188 10 log.go:181] (0x40008af760) Data frame received for 3 I0113 06:37:45.960374 10 log.go:181] (0x400114d220) (3) Data frame handling I0113 06:37:45.960950 10 log.go:181] (0x40008af760) Data frame received for 1 I0113 06:37:45.961090 10 log.go:181] (0x40008d08c0) (1) Data frame handling I0113 06:37:45.961223 10 log.go:181] (0x40008d08c0) (1) Data frame sent I0113 06:37:45.961375 10 log.go:181] (0x40008af760) (0x40008d08c0) Stream removed, broadcasting: 1 I0113 06:37:45.961532 10 log.go:181] (0x40008af760) Go away received I0113 06:37:45.961815 10 log.go:181] (0x40008af760) (0x40008d08c0) Stream removed, broadcasting: 1 I0113 06:37:45.961957 10 log.go:181] (0x40008af760) (0x400114d220) Stream removed, broadcasting: 3 I0113 06:37:45.962126 10 log.go:181] (0x40008af760) (0x4002e05360) Stream removed, broadcasting: 5 Jan 13 06:37:45.962: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 13 06:37:45.962: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:45.962: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.027618 10 log.go:181] (0x4000ca7760) (0x400114d4a0) Create stream I0113 06:37:46.027789 10 log.go:181] (0x4000ca7760) (0x400114d4a0) Stream added, broadcasting: 1 I0113 06:37:46.031104 10 log.go:181] (0x4000ca7760) Reply frame received for 1 I0113 06:37:46.031270 10 log.go:181] (0x4000ca7760) (0x400114d540) Create stream I0113 06:37:46.031349 10 log.go:181] (0x4000ca7760) (0x400114d540) Stream added, broadcasting: 3 I0113 06:37:46.032786 10 log.go:181] (0x4000ca7760) Reply frame received for 3 I0113 06:37:46.033023 10 log.go:181] (0x4000ca7760) (0x40008d0a00) Create stream I0113 06:37:46.033117 10 log.go:181] (0x4000ca7760) (0x40008d0a00) Stream added, broadcasting: 5 I0113 06:37:46.034531 10 log.go:181] (0x4000ca7760) Reply frame received for 5 I0113 06:37:46.093588 10 log.go:181] (0x4000ca7760) Data frame received for 5 I0113 06:37:46.093977 10 log.go:181] (0x40008d0a00) (5) Data frame handling I0113 06:37:46.094304 10 log.go:181] (0x4000ca7760) Data frame received for 3 I0113 06:37:46.094606 10 log.go:181] (0x400114d540) (3) Data frame handling I0113 06:37:46.094799 10 log.go:181] (0x400114d540) (3) Data frame sent I0113 06:37:46.094960 10 log.go:181] (0x4000ca7760) Data frame received for 3 I0113 06:37:46.095108 10 log.go:181] (0x400114d540) (3) Data frame handling I0113 06:37:46.097638 10 log.go:181] (0x4000ca7760) Data frame received for 1 I0113 06:37:46.097806 10 log.go:181] (0x400114d4a0) (1) Data frame handling I0113 06:37:46.098002 10 log.go:181] (0x400114d4a0) (1) Data frame sent I0113 06:37:46.098131 10 log.go:181] (0x4000ca7760) (0x400114d4a0) Stream removed, broadcasting: 1 I0113 06:37:46.098278 10 log.go:181] (0x4000ca7760) Go away received I0113 06:37:46.098928 10 log.go:181] (0x4000ca7760) (0x400114d4a0) Stream removed, broadcasting: 1 I0113 06:37:46.099158 10 log.go:181] (0x4000ca7760) (0x400114d540) Stream removed, broadcasting: 3 I0113 
06:37:46.099262 10 log.go:181] (0x4000ca7760) (0x40008d0a00) Stream removed, broadcasting: 5 Jan 13 06:37:46.099: INFO: Exec stderr: "" Jan 13 06:37:46.099: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:46.099: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.153700 10 log.go:181] (0x4000ca7e40) (0x400114d7c0) Create stream I0113 06:37:46.153883 10 log.go:181] (0x4000ca7e40) (0x400114d7c0) Stream added, broadcasting: 1 I0113 06:37:46.157554 10 log.go:181] (0x4000ca7e40) Reply frame received for 1 I0113 06:37:46.157775 10 log.go:181] (0x4000ca7e40) (0x4002e05400) Create stream I0113 06:37:46.157881 10 log.go:181] (0x4000ca7e40) (0x4002e05400) Stream added, broadcasting: 3 I0113 06:37:46.159477 10 log.go:181] (0x4000ca7e40) Reply frame received for 3 I0113 06:37:46.159617 10 log.go:181] (0x4000ca7e40) (0x4002e054a0) Create stream I0113 06:37:46.159693 10 log.go:181] (0x4000ca7e40) (0x4002e054a0) Stream added, broadcasting: 5 I0113 06:37:46.161255 10 log.go:181] (0x4000ca7e40) Reply frame received for 5 I0113 06:37:46.233499 10 log.go:181] (0x4000ca7e40) Data frame received for 5 I0113 06:37:46.233683 10 log.go:181] (0x4002e054a0) (5) Data frame handling I0113 06:37:46.233881 10 log.go:181] (0x4000ca7e40) Data frame received for 3 I0113 06:37:46.234042 10 log.go:181] (0x4002e05400) (3) Data frame handling I0113 06:37:46.234222 10 log.go:181] (0x4002e05400) (3) Data frame sent I0113 06:37:46.234385 10 log.go:181] (0x4000ca7e40) Data frame received for 3 I0113 06:37:46.234537 10 log.go:181] (0x4002e05400) (3) Data frame handling I0113 06:37:46.235072 10 log.go:181] (0x4000ca7e40) Data frame received for 1 I0113 06:37:46.235169 10 log.go:181] (0x400114d7c0) (1) Data frame handling I0113 06:37:46.235276 10 log.go:181] (0x400114d7c0) (1) Data frame sent I0113 06:37:46.235373 10 log.go:181] (0x4000ca7e40) (0x400114d7c0) Stream removed, broadcasting: 1 I0113 06:37:46.235479 10 log.go:181] (0x4000ca7e40) Go away received I0113 06:37:46.235780 10 log.go:181] (0x4000ca7e40) (0x400114d7c0) Stream removed, broadcasting: 1 I0113 06:37:46.235895 10 log.go:181] (0x4000ca7e40) (0x4002e05400) Stream removed, broadcasting: 3 I0113 06:37:46.235965 10 log.go:181] (0x4000ca7e40) (0x4002e054a0) Stream removed, broadcasting: 5 Jan 13 06:37:46.236: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 13 06:37:46.236: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:46.236: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.293921 10 log.go:181] (0x40023fe160) (0x400114dc20) Create stream I0113 06:37:46.294075 10 log.go:181] (0x40023fe160) (0x400114dc20) Stream added, broadcasting: 1 I0113 06:37:46.297896 10 log.go:181] (0x40023fe160) Reply frame received for 1 I0113 06:37:46.298074 10 log.go:181] (0x40023fe160) (0x400114dcc0) Create stream I0113 06:37:46.298169 10 log.go:181] (0x40023fe160) (0x400114dcc0) Stream added, broadcasting: 3 I0113 06:37:46.299992 10 log.go:181] (0x40023fe160) Reply frame received for 3 I0113 06:37:46.300259 10 log.go:181] (0x40023fe160) (0x4003d51e00) Create stream I0113 06:37:46.300456 10 log.go:181] (0x40023fe160) (0x4003d51e00) Stream 
added, broadcasting: 5 I0113 06:37:46.302819 10 log.go:181] (0x40023fe160) Reply frame received for 5 I0113 06:37:46.372634 10 log.go:181] (0x40023fe160) Data frame received for 3 I0113 06:37:46.372792 10 log.go:181] (0x400114dcc0) (3) Data frame handling I0113 06:37:46.372977 10 log.go:181] (0x400114dcc0) (3) Data frame sent I0113 06:37:46.373054 10 log.go:181] (0x40023fe160) Data frame received for 3 I0113 06:37:46.373114 10 log.go:181] (0x400114dcc0) (3) Data frame handling I0113 06:37:46.373232 10 log.go:181] (0x40023fe160) Data frame received for 5 I0113 06:37:46.373400 10 log.go:181] (0x4003d51e00) (5) Data frame handling I0113 06:37:46.374158 10 log.go:181] (0x40023fe160) Data frame received for 1 I0113 06:37:46.374287 10 log.go:181] (0x400114dc20) (1) Data frame handling I0113 06:37:46.374388 10 log.go:181] (0x400114dc20) (1) Data frame sent I0113 06:37:46.374495 10 log.go:181] (0x40023fe160) (0x400114dc20) Stream removed, broadcasting: 1 I0113 06:37:46.374615 10 log.go:181] (0x40023fe160) Go away received I0113 06:37:46.375165 10 log.go:181] (0x40023fe160) (0x400114dc20) Stream removed, broadcasting: 1 I0113 06:37:46.375355 10 log.go:181] (0x40023fe160) (0x400114dcc0) Stream removed, broadcasting: 3 I0113 06:37:46.375544 10 log.go:181] (0x40023fe160) (0x4003d51e00) Stream removed, broadcasting: 5 Jan 13 06:37:46.375: INFO: Exec stderr: "" Jan 13 06:37:46.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:46.375: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.431728 10 log.go:181] (0x40008afef0) (0x40008d0f00) Create stream I0113 06:37:46.431854 10 log.go:181] (0x40008afef0) (0x40008d0f00) Stream added, broadcasting: 1 I0113 06:37:46.436150 10 log.go:181] (0x40008afef0) Reply frame received for 1 I0113 06:37:46.436424 10 log.go:181] (0x40008afef0) (0x40008d1040) Create stream I0113 06:37:46.436546 10 log.go:181] (0x40008afef0) (0x40008d1040) Stream added, broadcasting: 3 I0113 06:37:46.438572 10 log.go:181] (0x40008afef0) Reply frame received for 3 I0113 06:37:46.438729 10 log.go:181] (0x40008afef0) (0x40010c57c0) Create stream I0113 06:37:46.438810 10 log.go:181] (0x40008afef0) (0x40010c57c0) Stream added, broadcasting: 5 I0113 06:37:46.440233 10 log.go:181] (0x40008afef0) Reply frame received for 5 I0113 06:37:46.502194 10 log.go:181] (0x40008afef0) Data frame received for 5 I0113 06:37:46.502394 10 log.go:181] (0x40010c57c0) (5) Data frame handling I0113 06:37:46.502628 10 log.go:181] (0x40008afef0) Data frame received for 3 I0113 06:37:46.502811 10 log.go:181] (0x40008d1040) (3) Data frame handling I0113 06:37:46.502928 10 log.go:181] (0x40008d1040) (3) Data frame sent I0113 06:37:46.502999 10 log.go:181] (0x40008afef0) Data frame received for 3 I0113 06:37:46.503063 10 log.go:181] (0x40008d1040) (3) Data frame handling I0113 06:37:46.503687 10 log.go:181] (0x40008afef0) Data frame received for 1 I0113 06:37:46.503771 10 log.go:181] (0x40008d0f00) (1) Data frame handling I0113 06:37:46.503866 10 log.go:181] (0x40008d0f00) (1) Data frame sent I0113 06:37:46.503952 10 log.go:181] (0x40008afef0) (0x40008d0f00) Stream removed, broadcasting: 1 I0113 06:37:46.504131 10 log.go:181] (0x40008afef0) Go away received I0113 06:37:46.504426 10 log.go:181] (0x40008afef0) (0x40008d0f00) Stream removed, broadcasting: 1 I0113 06:37:46.504557 10 log.go:181] (0x40008afef0) (0x40008d1040) 
Stream removed, broadcasting: 3 I0113 06:37:46.504654 10 log.go:181] (0x40008afef0) (0x40010c57c0) Stream removed, broadcasting: 5 Jan 13 06:37:46.504: INFO: Exec stderr: "" Jan 13 06:37:46.505: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:46.505: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.561678 10 log.go:181] (0x4000722630) (0x40027d5680) Create stream I0113 06:37:46.561827 10 log.go:181] (0x4000722630) (0x40027d5680) Stream added, broadcasting: 1 I0113 06:37:46.568187 10 log.go:181] (0x4000722630) Reply frame received for 1 I0113 06:37:46.568392 10 log.go:181] (0x4000722630) (0x4002e05540) Create stream I0113 06:37:46.568502 10 log.go:181] (0x4000722630) (0x4002e05540) Stream added, broadcasting: 3 I0113 06:37:46.570262 10 log.go:181] (0x4000722630) Reply frame received for 3 I0113 06:37:46.570384 10 log.go:181] (0x4000722630) (0x4002e055e0) Create stream I0113 06:37:46.570453 10 log.go:181] (0x4000722630) (0x4002e055e0) Stream added, broadcasting: 5 I0113 06:37:46.571729 10 log.go:181] (0x4000722630) Reply frame received for 5 I0113 06:37:46.623947 10 log.go:181] (0x4000722630) Data frame received for 5 I0113 06:37:46.624116 10 log.go:181] (0x4002e055e0) (5) Data frame handling I0113 06:37:46.624233 10 log.go:181] (0x4000722630) Data frame received for 3 I0113 06:37:46.624334 10 log.go:181] (0x4002e05540) (3) Data frame handling I0113 06:37:46.624445 10 log.go:181] (0x4002e05540) (3) Data frame sent I0113 06:37:46.624552 10 log.go:181] (0x4000722630) Data frame received for 3 I0113 06:37:46.624636 10 log.go:181] (0x4002e05540) (3) Data frame handling I0113 06:37:46.625489 10 log.go:181] (0x4000722630) Data frame received for 1 I0113 06:37:46.625618 10 log.go:181] (0x40027d5680) (1) Data frame handling I0113 06:37:46.625783 10 log.go:181] (0x40027d5680) (1) Data frame sent I0113 06:37:46.625936 10 log.go:181] (0x4000722630) (0x40027d5680) Stream removed, broadcasting: 1 I0113 06:37:46.626091 10 log.go:181] (0x4000722630) Go away received I0113 06:37:46.626331 10 log.go:181] (0x4000722630) (0x40027d5680) Stream removed, broadcasting: 1 I0113 06:37:46.626407 10 log.go:181] (0x4000722630) (0x4002e05540) Stream removed, broadcasting: 3 I0113 06:37:46.626467 10 log.go:181] (0x4000722630) (0x4002e055e0) Stream removed, broadcasting: 5 Jan 13 06:37:46.626: INFO: Exec stderr: "" Jan 13 06:37:46.626: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3186 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:37:46.626: INFO: >>> kubeConfig: /root/.kube/config I0113 06:37:46.679070 10 log.go:181] (0x4001a28e70) (0x4003eee1e0) Create stream I0113 06:37:46.679194 10 log.go:181] (0x4001a28e70) (0x4003eee1e0) Stream added, broadcasting: 1 I0113 06:37:46.682981 10 log.go:181] (0x4001a28e70) Reply frame received for 1 I0113 06:37:46.683122 10 log.go:181] (0x4001a28e70) (0x4003eee280) Create stream I0113 06:37:46.683196 10 log.go:181] (0x4001a28e70) (0x4003eee280) Stream added, broadcasting: 3 I0113 06:37:46.684521 10 log.go:181] (0x4001a28e70) Reply frame received for 3 I0113 06:37:46.684661 10 log.go:181] (0x4001a28e70) (0x40010c5860) Create stream I0113 06:37:46.684737 10 log.go:181] (0x4001a28e70) (0x40010c5860) Stream added, broadcasting: 5 I0113 06:37:46.686234 10 
log.go:181] (0x4001a28e70) Reply frame received for 5 I0113 06:37:46.768907 10 log.go:181] (0x4001a28e70) Data frame received for 5 I0113 06:37:46.769055 10 log.go:181] (0x40010c5860) (5) Data frame handling I0113 06:37:46.769155 10 log.go:181] (0x4001a28e70) Data frame received for 3 I0113 06:37:46.769239 10 log.go:181] (0x4003eee280) (3) Data frame handling I0113 06:37:46.769330 10 log.go:181] (0x4003eee280) (3) Data frame sent I0113 06:37:46.769411 10 log.go:181] (0x4001a28e70) Data frame received for 3 I0113 06:37:46.769482 10 log.go:181] (0x4003eee280) (3) Data frame handling I0113 06:37:46.771848 10 log.go:181] (0x4001a28e70) Data frame received for 1 I0113 06:37:46.772033 10 log.go:181] (0x4003eee1e0) (1) Data frame handling I0113 06:37:46.772196 10 log.go:181] (0x4003eee1e0) (1) Data frame sent I0113 06:37:46.772376 10 log.go:181] (0x4001a28e70) (0x4003eee1e0) Stream removed, broadcasting: 1 I0113 06:37:46.772580 10 log.go:181] (0x4001a28e70) Go away received I0113 06:37:46.773216 10 log.go:181] (0x4001a28e70) (0x4003eee1e0) Stream removed, broadcasting: 1 I0113 06:37:46.773451 10 log.go:181] (0x4001a28e70) (0x4003eee280) Stream removed, broadcasting: 3 I0113 06:37:46.773609 10 log.go:181] (0x4001a28e70) (0x40010c5860) Stream removed, broadcasting: 5 Jan 13 06:37:46.773: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:46.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3186" for this suite. • [SLOW TEST:15.598 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":47,"skipped":822,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:46.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:37:49.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:37:52.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:37:54.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116669, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:37:57.554: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:37:57.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2828-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:37:58.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2259" for this suite. STEP: Destroying namespace "webhook-2259-markers" for this suite. 
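The step above registers a mutating webhook for the e2e-test-webhook-2828-crds.webhook.example.com custom resource via the AdmissionRegistration API. Below is a minimal client-go sketch of what such a registration can look like; the API group, resource plural and service name are taken from this log, while the configuration name, service path and CA bundle are hypothetical placeholders rather than the e2e framework's exact helper.

```go
package example

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerMutatingWebhook sketches the registration step logged above.
// Group/resource/service names come from the log; path and object name are illustrative.
func registerMutatingWebhook(client kubernetes.Interface, caBundle []byte) error {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	path := "/mutating-custom-resource" // hypothetical path served by the webhook pod

	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"}, // illustrative name
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "e2e-test-webhook-2828-crds.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-2259",     // namespace used in this run
					Name:      "e2e-test-webhook", // service name from the log
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create, admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1", "v2"},
					Resources:   []string{"e2e-test-webhook-2828-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}
```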
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.165 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":309,"completed":48,"skipped":835,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:37:58.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:37:59.015: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3458 I0113 06:37:59.078364 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3458, replica count: 1 I0113 06:38:00.129506 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:38:01.130267 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:38:02.130961 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:38:02.273: INFO: Created: latency-svc-wj5jq Jan 13 06:38:02.295: INFO: Got endpoints: latency-svc-wj5jq [61.483943ms] Jan 13 06:38:02.347: INFO: Created: latency-svc-dqk9g Jan 13 06:38:02.355: INFO: Got endpoints: latency-svc-dqk9g [59.209376ms] Jan 13 06:38:02.385: INFO: Created: latency-svc-qvw77 Jan 13 06:38:02.401: INFO: Got endpoints: latency-svc-qvw77 [103.906969ms] Jan 13 06:38:02.445: INFO: Created: latency-svc-pf7jm Jan 13 06:38:02.498: INFO: Got endpoints: latency-svc-pf7jm [199.697013ms] Jan 13 06:38:02.523: INFO: Created: latency-svc-vq68l Jan 13 06:38:02.552: INFO: Got endpoints: latency-svc-vq68l [256.643457ms] Jan 13 06:38:02.689: INFO: Created: latency-svc-z94ff Jan 13 06:38:02.727: INFO: Got endpoints: latency-svc-z94ff [431.15341ms] Jan 13 06:38:02.764: INFO: Created: latency-svc-wlt84 Jan 13 06:38:02.853: INFO: Got endpoints: latency-svc-wlt84 [556.041315ms] Jan 13 06:38:02.870: INFO: Created: latency-svc-vmf8s Jan 13 06:38:02.920: INFO: Got endpoints: latency-svc-vmf8s [624.700202ms] Jan 13 06:38:02.989: INFO: Created: latency-svc-tkpqh Jan 13 06:38:03.059: INFO: Got endpoints: 
latency-svc-tkpqh [761.511821ms] Jan 13 06:38:03.060: INFO: Created: latency-svc-sfgdt Jan 13 06:38:03.149: INFO: Got endpoints: latency-svc-sfgdt [853.496269ms] Jan 13 06:38:03.239: INFO: Created: latency-svc-2bckh Jan 13 06:38:03.319: INFO: Got endpoints: latency-svc-2bckh [1.019995204s] Jan 13 06:38:03.374: INFO: Created: latency-svc-2w8t7 Jan 13 06:38:03.407: INFO: Got endpoints: latency-svc-2w8t7 [1.11134504s] Jan 13 06:38:03.479: INFO: Created: latency-svc-mdk7r Jan 13 06:38:03.486: INFO: Got endpoints: latency-svc-mdk7r [1.189664363s] Jan 13 06:38:03.507: INFO: Created: latency-svc-xm28b Jan 13 06:38:03.537: INFO: Got endpoints: latency-svc-xm28b [1.240445327s] Jan 13 06:38:03.569: INFO: Created: latency-svc-z2pd5 Jan 13 06:38:03.599: INFO: Got endpoints: latency-svc-z2pd5 [1.302958133s] Jan 13 06:38:03.604: INFO: Created: latency-svc-z52mw Jan 13 06:38:03.619: INFO: Got endpoints: latency-svc-z52mw [1.323262268s] Jan 13 06:38:03.634: INFO: Created: latency-svc-fcwdn Jan 13 06:38:03.647: INFO: Got endpoints: latency-svc-fcwdn [1.291790978s] Jan 13 06:38:03.674: INFO: Created: latency-svc-z4mf5 Jan 13 06:38:03.692: INFO: Got endpoints: latency-svc-z4mf5 [1.290292463s] Jan 13 06:38:03.731: INFO: Created: latency-svc-cdkvq Jan 13 06:38:03.753: INFO: Got endpoints: latency-svc-cdkvq [1.255511821s] Jan 13 06:38:03.755: INFO: Created: latency-svc-jtxcj Jan 13 06:38:03.781: INFO: Got endpoints: latency-svc-jtxcj [1.228049601s] Jan 13 06:38:03.808: INFO: Created: latency-svc-trwpn Jan 13 06:38:03.823: INFO: Got endpoints: latency-svc-trwpn [1.096058874s] Jan 13 06:38:03.869: INFO: Created: latency-svc-lp26q Jan 13 06:38:03.890: INFO: Created: latency-svc-27tm6 Jan 13 06:38:03.891: INFO: Got endpoints: latency-svc-lp26q [1.037103923s] Jan 13 06:38:03.915: INFO: Got endpoints: latency-svc-27tm6 [993.912426ms] Jan 13 06:38:03.939: INFO: Created: latency-svc-bk4bn Jan 13 06:38:03.953: INFO: Got endpoints: latency-svc-bk4bn [894.402718ms] Jan 13 06:38:04.014: INFO: Created: latency-svc-kfbpf Jan 13 06:38:04.031: INFO: Got endpoints: latency-svc-kfbpf [881.048488ms] Jan 13 06:38:04.031: INFO: Created: latency-svc-9kx89 Jan 13 06:38:04.043: INFO: Got endpoints: latency-svc-9kx89 [723.265382ms] Jan 13 06:38:04.061: INFO: Created: latency-svc-2zmpm Jan 13 06:38:04.075: INFO: Got endpoints: latency-svc-2zmpm [667.212361ms] Jan 13 06:38:04.088: INFO: Created: latency-svc-nqphl Jan 13 06:38:04.102: INFO: Got endpoints: latency-svc-nqphl [616.245547ms] Jan 13 06:38:04.144: INFO: Created: latency-svc-qwqkw Jan 13 06:38:04.163: INFO: Got endpoints: latency-svc-qwqkw [626.139877ms] Jan 13 06:38:04.205: INFO: Created: latency-svc-c68zb Jan 13 06:38:04.232: INFO: Got endpoints: latency-svc-c68zb [632.928203ms] Jan 13 06:38:04.317: INFO: Created: latency-svc-tr6z8 Jan 13 06:38:04.339: INFO: Got endpoints: latency-svc-tr6z8 [719.796098ms] Jan 13 06:38:04.455: INFO: Created: latency-svc-ft9r2 Jan 13 06:38:04.512: INFO: Got endpoints: latency-svc-ft9r2 [864.793067ms] Jan 13 06:38:04.522: INFO: Created: latency-svc-5gsdg Jan 13 06:38:04.538: INFO: Got endpoints: latency-svc-5gsdg [846.257522ms] Jan 13 06:38:04.608: INFO: Created: latency-svc-jbdqk Jan 13 06:38:04.631: INFO: Got endpoints: latency-svc-jbdqk [877.441764ms] Jan 13 06:38:04.668: INFO: Created: latency-svc-thtp8 Jan 13 06:38:04.754: INFO: Got endpoints: latency-svc-thtp8 [973.031792ms] Jan 13 06:38:04.788: INFO: Created: latency-svc-s8rw6 Jan 13 06:38:04.805: INFO: Got endpoints: latency-svc-s8rw6 [981.782358ms] Jan 13 06:38:04.822: INFO: Created: 
latency-svc-6rjfl Jan 13 06:38:04.851: INFO: Got endpoints: latency-svc-6rjfl [959.866848ms] Jan 13 06:38:04.910: INFO: Created: latency-svc-kzwzb Jan 13 06:38:04.917: INFO: Got endpoints: latency-svc-kzwzb [1.002192233s] Jan 13 06:38:04.979: INFO: Created: latency-svc-6cd6f Jan 13 06:38:05.037: INFO: Got endpoints: latency-svc-6cd6f [1.083333948s] Jan 13 06:38:05.044: INFO: Created: latency-svc-2lkkx Jan 13 06:38:05.061: INFO: Got endpoints: latency-svc-2lkkx [1.030152115s] Jan 13 06:38:05.129: INFO: Created: latency-svc-br8d2 Jan 13 06:38:05.180: INFO: Got endpoints: latency-svc-br8d2 [1.137490429s] Jan 13 06:38:05.207: INFO: Created: latency-svc-627kr Jan 13 06:38:05.244: INFO: Got endpoints: latency-svc-627kr [1.16858279s] Jan 13 06:38:05.272: INFO: Created: latency-svc-clgkw Jan 13 06:38:05.299: INFO: Got endpoints: latency-svc-clgkw [1.196974154s] Jan 13 06:38:05.337: INFO: Created: latency-svc-ztwjb Jan 13 06:38:05.351: INFO: Got endpoints: latency-svc-ztwjb [1.187209935s] Jan 13 06:38:05.425: INFO: Created: latency-svc-74t7z Jan 13 06:38:05.446: INFO: Got endpoints: latency-svc-74t7z [1.214430935s] Jan 13 06:38:05.477: INFO: Created: latency-svc-wjkwk Jan 13 06:38:05.492: INFO: Got endpoints: latency-svc-wjkwk [1.152683402s] Jan 13 06:38:05.519: INFO: Created: latency-svc-qpzrh Jan 13 06:38:05.563: INFO: Got endpoints: latency-svc-qpzrh [1.050736929s] Jan 13 06:38:05.589: INFO: Created: latency-svc-phq67 Jan 13 06:38:05.621: INFO: Got endpoints: latency-svc-phq67 [1.082190664s] Jan 13 06:38:05.657: INFO: Created: latency-svc-pqngx Jan 13 06:38:05.690: INFO: Got endpoints: latency-svc-pqngx [1.058425371s] Jan 13 06:38:05.727: INFO: Created: latency-svc-hvs2d Jan 13 06:38:05.759: INFO: Got endpoints: latency-svc-hvs2d [1.004139719s] Jan 13 06:38:05.827: INFO: Created: latency-svc-z4ns4 Jan 13 06:38:05.850: INFO: Created: latency-svc-cbsb7 Jan 13 06:38:05.850: INFO: Got endpoints: latency-svc-z4ns4 [1.044415095s] Jan 13 06:38:05.880: INFO: Got endpoints: latency-svc-cbsb7 [1.028549236s] Jan 13 06:38:05.999: INFO: Created: latency-svc-54gtv Jan 13 06:38:06.042: INFO: Got endpoints: latency-svc-54gtv [1.124399296s] Jan 13 06:38:06.060: INFO: Created: latency-svc-s5w6q Jan 13 06:38:06.115: INFO: Created: latency-svc-dgn8x Jan 13 06:38:06.115: INFO: Got endpoints: latency-svc-s5w6q [1.077974491s] Jan 13 06:38:06.128: INFO: Got endpoints: latency-svc-dgn8x [1.066875805s] Jan 13 06:38:06.153: INFO: Created: latency-svc-j9b2c Jan 13 06:38:06.167: INFO: Got endpoints: latency-svc-j9b2c [986.002245ms] Jan 13 06:38:06.182: INFO: Created: latency-svc-mwqpr Jan 13 06:38:06.197: INFO: Got endpoints: latency-svc-mwqpr [952.754405ms] Jan 13 06:38:06.252: INFO: Created: latency-svc-c2qmp Jan 13 06:38:06.276: INFO: Created: latency-svc-gpstv Jan 13 06:38:06.277: INFO: Got endpoints: latency-svc-c2qmp [977.466275ms] Jan 13 06:38:06.306: INFO: Got endpoints: latency-svc-gpstv [954.651105ms] Jan 13 06:38:06.339: INFO: Created: latency-svc-jm957 Jan 13 06:38:06.371: INFO: Got endpoints: latency-svc-jm957 [923.991641ms] Jan 13 06:38:06.387: INFO: Created: latency-svc-zcgtg Jan 13 06:38:06.403: INFO: Got endpoints: latency-svc-zcgtg [910.968515ms] Jan 13 06:38:06.423: INFO: Created: latency-svc-hpm56 Jan 13 06:38:06.439: INFO: Got endpoints: latency-svc-hpm56 [875.589943ms] Jan 13 06:38:06.466: INFO: Created: latency-svc-jqshm Jan 13 06:38:06.497: INFO: Got endpoints: latency-svc-jqshm [876.4453ms] Jan 13 06:38:06.539: INFO: Created: latency-svc-8kqjj Jan 13 06:38:06.561: INFO: Got endpoints: 
latency-svc-8kqjj [871.088594ms] Jan 13 06:38:06.591: INFO: Created: latency-svc-k2h25 Jan 13 06:38:06.646: INFO: Got endpoints: latency-svc-k2h25 [887.53048ms] Jan 13 06:38:06.670: INFO: Created: latency-svc-9m27t Jan 13 06:38:06.688: INFO: Got endpoints: latency-svc-9m27t [837.284716ms] Jan 13 06:38:06.772: INFO: Created: latency-svc-xpf86 Jan 13 06:38:06.776: INFO: Got endpoints: latency-svc-xpf86 [895.876844ms] Jan 13 06:38:06.831: INFO: Created: latency-svc-n8h4c Jan 13 06:38:06.843: INFO: Got endpoints: latency-svc-n8h4c [801.19027ms] Jan 13 06:38:06.955: INFO: Created: latency-svc-84n9h Jan 13 06:38:06.982: INFO: Created: latency-svc-vj226 Jan 13 06:38:06.982: INFO: Got endpoints: latency-svc-84n9h [866.825926ms] Jan 13 06:38:07.023: INFO: Got endpoints: latency-svc-vj226 [894.942402ms] Jan 13 06:38:07.115: INFO: Created: latency-svc-qxx5f Jan 13 06:38:07.121: INFO: Got endpoints: latency-svc-qxx5f [954.20654ms] Jan 13 06:38:07.176: INFO: Created: latency-svc-n79d8 Jan 13 06:38:07.189: INFO: Got endpoints: latency-svc-n79d8 [991.605584ms] Jan 13 06:38:07.259: INFO: Created: latency-svc-r4v9c Jan 13 06:38:07.265: INFO: Got endpoints: latency-svc-r4v9c [987.686253ms] Jan 13 06:38:07.339: INFO: Created: latency-svc-p4zz6 Jan 13 06:38:07.349: INFO: Got endpoints: latency-svc-p4zz6 [1.042898905s] Jan 13 06:38:07.430: INFO: Created: latency-svc-vb5wq Jan 13 06:38:07.447: INFO: Got endpoints: latency-svc-vb5wq [1.0765249s] Jan 13 06:38:07.481: INFO: Created: latency-svc-x5dsn Jan 13 06:38:07.546: INFO: Got endpoints: latency-svc-x5dsn [1.143223087s] Jan 13 06:38:07.571: INFO: Created: latency-svc-qnbgh Jan 13 06:38:07.599: INFO: Got endpoints: latency-svc-qnbgh [1.159682728s] Jan 13 06:38:07.619: INFO: Created: latency-svc-b2pdx Jan 13 06:38:07.633: INFO: Got endpoints: latency-svc-b2pdx [1.13554341s] Jan 13 06:38:07.690: INFO: Created: latency-svc-r2t9d Jan 13 06:38:07.699: INFO: Got endpoints: latency-svc-r2t9d [1.137754499s] Jan 13 06:38:07.750: INFO: Created: latency-svc-dfvkr Jan 13 06:38:07.763: INFO: Got endpoints: latency-svc-dfvkr [1.115971336s] Jan 13 06:38:07.809: INFO: Created: latency-svc-wnk8c Jan 13 06:38:07.839: INFO: Got endpoints: latency-svc-wnk8c [1.151282104s] Jan 13 06:38:07.894: INFO: Created: latency-svc-64f4c Jan 13 06:38:07.906: INFO: Got endpoints: latency-svc-64f4c [1.129841826s] Jan 13 06:38:07.974: INFO: Created: latency-svc-cvz6p Jan 13 06:38:07.985: INFO: Got endpoints: latency-svc-cvz6p [1.14232597s] Jan 13 06:38:08.003: INFO: Created: latency-svc-vtddw Jan 13 06:38:08.015: INFO: Got endpoints: latency-svc-vtddw [1.032467987s] Jan 13 06:38:08.033: INFO: Created: latency-svc-p2xbz Jan 13 06:38:08.078: INFO: Got endpoints: latency-svc-p2xbz [1.054755977s] Jan 13 06:38:08.090: INFO: Created: latency-svc-x5mv2 Jan 13 06:38:08.108: INFO: Got endpoints: latency-svc-x5mv2 [987.130892ms] Jan 13 06:38:08.126: INFO: Created: latency-svc-pc2q6 Jan 13 06:38:08.138: INFO: Got endpoints: latency-svc-pc2q6 [949.136847ms] Jan 13 06:38:08.158: INFO: Created: latency-svc-j8xxj Jan 13 06:38:08.172: INFO: Got endpoints: latency-svc-j8xxj [907.036721ms] Jan 13 06:38:08.215: INFO: Created: latency-svc-qg7lh Jan 13 06:38:08.220: INFO: Got endpoints: latency-svc-qg7lh [870.613016ms] Jan 13 06:38:08.265: INFO: Created: latency-svc-xxjt9 Jan 13 06:38:08.294: INFO: Got endpoints: latency-svc-xxjt9 [846.531197ms] Jan 13 06:38:08.371: INFO: Created: latency-svc-f7rpc Jan 13 06:38:08.393: INFO: Created: latency-svc-hrhv4 Jan 13 06:38:08.393: INFO: Got endpoints: latency-svc-f7rpc 
[846.193375ms] Jan 13 06:38:08.417: INFO: Got endpoints: latency-svc-hrhv4 [817.432411ms] Jan 13 06:38:08.446: INFO: Created: latency-svc-dvdh7 Jan 13 06:38:08.463: INFO: Got endpoints: latency-svc-dvdh7 [829.519858ms] Jan 13 06:38:08.526: INFO: Created: latency-svc-bkf7k Jan 13 06:38:08.541: INFO: Got endpoints: latency-svc-bkf7k [841.78654ms] Jan 13 06:38:08.574: INFO: Created: latency-svc-mvw62 Jan 13 06:38:08.589: INFO: Got endpoints: latency-svc-mvw62 [826.419647ms] Jan 13 06:38:08.609: INFO: Created: latency-svc-d46td Jan 13 06:38:08.620: INFO: Got endpoints: latency-svc-d46td [780.913188ms] Jan 13 06:38:08.670: INFO: Created: latency-svc-m26zq Jan 13 06:38:08.691: INFO: Got endpoints: latency-svc-m26zq [785.078729ms] Jan 13 06:38:08.721: INFO: Created: latency-svc-54xz2 Jan 13 06:38:08.729: INFO: Got endpoints: latency-svc-54xz2 [743.797543ms] Jan 13 06:38:08.827: INFO: Created: latency-svc-gxjvl Jan 13 06:38:08.843: INFO: Created: latency-svc-dkmnw Jan 13 06:38:08.844: INFO: Got endpoints: latency-svc-gxjvl [829.083859ms] Jan 13 06:38:08.870: INFO: Got endpoints: latency-svc-dkmnw [791.736689ms] Jan 13 06:38:08.901: INFO: Created: latency-svc-8665g Jan 13 06:38:08.919: INFO: Got endpoints: latency-svc-8665g [810.256545ms] Jan 13 06:38:08.970: INFO: Created: latency-svc-w99x2 Jan 13 06:38:08.993: INFO: Got endpoints: latency-svc-w99x2 [855.080969ms] Jan 13 06:38:08.994: INFO: Created: latency-svc-m5dnf Jan 13 06:38:09.052: INFO: Got endpoints: latency-svc-m5dnf [879.506825ms] Jan 13 06:38:09.126: INFO: Created: latency-svc-st5kl Jan 13 06:38:09.163: INFO: Got endpoints: latency-svc-st5kl [942.832325ms] Jan 13 06:38:09.163: INFO: Created: latency-svc-885jj Jan 13 06:38:09.221: INFO: Got endpoints: latency-svc-885jj [926.971784ms] Jan 13 06:38:09.270: INFO: Created: latency-svc-79qdx Jan 13 06:38:09.302: INFO: Got endpoints: latency-svc-79qdx [908.577765ms] Jan 13 06:38:09.302: INFO: Created: latency-svc-c7lxv Jan 13 06:38:09.329: INFO: Got endpoints: latency-svc-c7lxv [911.636431ms] Jan 13 06:38:09.352: INFO: Created: latency-svc-2qv4m Jan 13 06:38:09.363: INFO: Got endpoints: latency-svc-2qv4m [899.559712ms] Jan 13 06:38:09.401: INFO: Created: latency-svc-xx8v9 Jan 13 06:38:09.418: INFO: Created: latency-svc-xkc5x Jan 13 06:38:09.418: INFO: Got endpoints: latency-svc-xx8v9 [877.010778ms] Jan 13 06:38:09.447: INFO: Got endpoints: latency-svc-xkc5x [857.551045ms] Jan 13 06:38:09.472: INFO: Created: latency-svc-5682t Jan 13 06:38:09.496: INFO: Got endpoints: latency-svc-5682t [875.203128ms] Jan 13 06:38:09.569: INFO: Created: latency-svc-7k5c6 Jan 13 06:38:09.580: INFO: Got endpoints: latency-svc-7k5c6 [888.906497ms] Jan 13 06:38:09.604: INFO: Created: latency-svc-6cr2l Jan 13 06:38:09.622: INFO: Got endpoints: latency-svc-6cr2l [892.821132ms] Jan 13 06:38:09.651: INFO: Created: latency-svc-gngfk Jan 13 06:38:09.701: INFO: Got endpoints: latency-svc-gngfk [856.957228ms] Jan 13 06:38:09.711: INFO: Created: latency-svc-snvqx Jan 13 06:38:09.743: INFO: Got endpoints: latency-svc-snvqx [872.628664ms] Jan 13 06:38:09.766: INFO: Created: latency-svc-cs9x4 Jan 13 06:38:09.785: INFO: Got endpoints: latency-svc-cs9x4 [866.271993ms] Jan 13 06:38:09.846: INFO: Created: latency-svc-h9mdz Jan 13 06:38:09.867: INFO: Created: latency-svc-hck5l Jan 13 06:38:09.868: INFO: Got endpoints: latency-svc-h9mdz [874.512867ms] Jan 13 06:38:09.897: INFO: Got endpoints: latency-svc-hck5l [844.70296ms] Jan 13 06:38:09.936: INFO: Created: latency-svc-4f8hp Jan 13 06:38:09.971: INFO: Got endpoints: 
latency-svc-4f8hp [808.081292ms] Jan 13 06:38:09.976: INFO: Created: latency-svc-tfg4s Jan 13 06:38:09.994: INFO: Got endpoints: latency-svc-tfg4s [772.387615ms] Jan 13 06:38:10.013: INFO: Created: latency-svc-cqkkz Jan 13 06:38:10.053: INFO: Got endpoints: latency-svc-cqkkz [750.989766ms] Jan 13 06:38:10.102: INFO: Created: latency-svc-jxlfl Jan 13 06:38:10.145: INFO: Created: latency-svc-29vp7 Jan 13 06:38:10.145: INFO: Got endpoints: latency-svc-jxlfl [816.232957ms] Jan 13 06:38:10.175: INFO: Got endpoints: latency-svc-29vp7 [811.672743ms] Jan 13 06:38:10.246: INFO: Created: latency-svc-f6smk Jan 13 06:38:10.300: INFO: Got endpoints: latency-svc-f6smk [881.082451ms] Jan 13 06:38:10.301: INFO: Created: latency-svc-6sxhz Jan 13 06:38:10.344: INFO: Got endpoints: latency-svc-6sxhz [896.769875ms] Jan 13 06:38:10.383: INFO: Created: latency-svc-km9rn Jan 13 06:38:10.414: INFO: Got endpoints: latency-svc-km9rn [918.136163ms] Jan 13 06:38:10.414: INFO: Created: latency-svc-8xf9n Jan 13 06:38:10.448: INFO: Got endpoints: latency-svc-8xf9n [868.301178ms] Jan 13 06:38:10.517: INFO: Created: latency-svc-kqfsx Jan 13 06:38:10.553: INFO: Got endpoints: latency-svc-kqfsx [930.025511ms] Jan 13 06:38:10.553: INFO: Created: latency-svc-qh44b Jan 13 06:38:10.577: INFO: Got endpoints: latency-svc-qh44b [875.022696ms] Jan 13 06:38:10.608: INFO: Created: latency-svc-z9kqg Jan 13 06:38:10.678: INFO: Got endpoints: latency-svc-z9kqg [934.440734ms] Jan 13 06:38:10.679: INFO: Created: latency-svc-chh9l Jan 13 06:38:10.682: INFO: Got endpoints: latency-svc-chh9l [896.323932ms] Jan 13 06:38:10.726: INFO: Created: latency-svc-tsbmj Jan 13 06:38:10.736: INFO: Got endpoints: latency-svc-tsbmj [867.941931ms] Jan 13 06:38:10.756: INFO: Created: latency-svc-vzhtv Jan 13 06:38:10.803: INFO: Got endpoints: latency-svc-vzhtv [905.832443ms] Jan 13 06:38:10.816: INFO: Created: latency-svc-qkrfj Jan 13 06:38:10.840: INFO: Got endpoints: latency-svc-qkrfj [869.216504ms] Jan 13 06:38:10.869: INFO: Created: latency-svc-7pb8l Jan 13 06:38:10.886: INFO: Got endpoints: latency-svc-7pb8l [891.878082ms] Jan 13 06:38:10.935: INFO: Created: latency-svc-fpj7z Jan 13 06:38:10.941: INFO: Got endpoints: latency-svc-fpj7z [887.730036ms] Jan 13 06:38:10.962: INFO: Created: latency-svc-lb2fc Jan 13 06:38:10.973: INFO: Got endpoints: latency-svc-lb2fc [828.059656ms] Jan 13 06:38:10.997: INFO: Created: latency-svc-gmjzf Jan 13 06:38:11.003: INFO: Got endpoints: latency-svc-gmjzf [828.127239ms] Jan 13 06:38:11.020: INFO: Created: latency-svc-hmmsj Jan 13 06:38:11.085: INFO: Got endpoints: latency-svc-hmmsj [784.951823ms] Jan 13 06:38:11.145: INFO: Created: latency-svc-rrsrf Jan 13 06:38:11.161: INFO: Got endpoints: latency-svc-rrsrf [816.991042ms] Jan 13 06:38:11.252: INFO: Created: latency-svc-fhcp7 Jan 13 06:38:11.285: INFO: Created: latency-svc-lb78q Jan 13 06:38:11.286: INFO: Got endpoints: latency-svc-fhcp7 [871.734667ms] Jan 13 06:38:11.319: INFO: Got endpoints: latency-svc-lb78q [870.426472ms] Jan 13 06:38:11.377: INFO: Created: latency-svc-c9dnh Jan 13 06:38:11.390: INFO: Got endpoints: latency-svc-c9dnh [837.354523ms] Jan 13 06:38:11.408: INFO: Created: latency-svc-cv4zq Jan 13 06:38:11.420: INFO: Got endpoints: latency-svc-cv4zq [842.57786ms] Jan 13 06:38:11.440: INFO: Created: latency-svc-qqvgt Jan 13 06:38:11.456: INFO: Got endpoints: latency-svc-qqvgt [778.339805ms] Jan 13 06:38:11.477: INFO: Created: latency-svc-4zlcg Jan 13 06:38:11.509: INFO: Got endpoints: latency-svc-4zlcg [826.646504ms] Jan 13 06:38:11.528: INFO: Created: 
latency-svc-2swsp Jan 13 06:38:11.554: INFO: Got endpoints: latency-svc-2swsp [817.262049ms] Jan 13 06:38:11.571: INFO: Created: latency-svc-bmnw4 Jan 13 06:38:11.588: INFO: Got endpoints: latency-svc-bmnw4 [784.450191ms] Jan 13 06:38:11.642: INFO: Created: latency-svc-wnbzk Jan 13 06:38:11.675: INFO: Created: latency-svc-rr7rs Jan 13 06:38:11.676: INFO: Got endpoints: latency-svc-wnbzk [835.042564ms] Jan 13 06:38:11.699: INFO: Got endpoints: latency-svc-rr7rs [812.785651ms] Jan 13 06:38:11.723: INFO: Created: latency-svc-c2j6f Jan 13 06:38:11.736: INFO: Got endpoints: latency-svc-c2j6f [794.581118ms] Jan 13 06:38:11.802: INFO: Created: latency-svc-v9qrz Jan 13 06:38:11.811: INFO: Got endpoints: latency-svc-v9qrz [837.344314ms] Jan 13 06:38:11.837: INFO: Created: latency-svc-75ldl Jan 13 06:38:11.849: INFO: Got endpoints: latency-svc-75ldl [845.100608ms] Jan 13 06:38:11.873: INFO: Created: latency-svc-qd998 Jan 13 06:38:11.888: INFO: Got endpoints: latency-svc-qd998 [802.848742ms] Jan 13 06:38:11.941: INFO: Created: latency-svc-bzmx2 Jan 13 06:38:11.947: INFO: Got endpoints: latency-svc-bzmx2 [785.50274ms] Jan 13 06:38:11.998: INFO: Created: latency-svc-p6cj2 Jan 13 06:38:12.020: INFO: Got endpoints: latency-svc-p6cj2 [733.584232ms] Jan 13 06:38:12.072: INFO: Created: latency-svc-gh77c Jan 13 06:38:12.094: INFO: Created: latency-svc-tmnhm Jan 13 06:38:12.095: INFO: Got endpoints: latency-svc-gh77c [775.400918ms] Jan 13 06:38:12.109: INFO: Got endpoints: latency-svc-tmnhm [718.462419ms] Jan 13 06:38:12.128: INFO: Created: latency-svc-fbqkw Jan 13 06:38:12.145: INFO: Got endpoints: latency-svc-fbqkw [725.459335ms] Jan 13 06:38:12.172: INFO: Created: latency-svc-n62dl Jan 13 06:38:12.205: INFO: Got endpoints: latency-svc-n62dl [748.062429ms] Jan 13 06:38:12.220: INFO: Created: latency-svc-k7bdw Jan 13 06:38:12.232: INFO: Got endpoints: latency-svc-k7bdw [722.558214ms] Jan 13 06:38:12.263: INFO: Created: latency-svc-hf57p Jan 13 06:38:12.281: INFO: Got endpoints: latency-svc-hf57p [726.613657ms] Jan 13 06:38:12.354: INFO: Created: latency-svc-t2lnn Jan 13 06:38:12.375: INFO: Got endpoints: latency-svc-t2lnn [786.370933ms] Jan 13 06:38:12.379: INFO: Created: latency-svc-fmwmp Jan 13 06:38:12.399: INFO: Got endpoints: latency-svc-fmwmp [722.542117ms] Jan 13 06:38:12.430: INFO: Created: latency-svc-5m54n Jan 13 06:38:12.497: INFO: Got endpoints: latency-svc-5m54n [797.712561ms] Jan 13 06:38:12.514: INFO: Created: latency-svc-tc48b Jan 13 06:38:12.529: INFO: Got endpoints: latency-svc-tc48b [792.811749ms] Jan 13 06:38:12.592: INFO: Created: latency-svc-mkpld Jan 13 06:38:12.619: INFO: Got endpoints: latency-svc-mkpld [807.488315ms] Jan 13 06:38:12.635: INFO: Created: latency-svc-5mg7s Jan 13 06:38:12.649: INFO: Got endpoints: latency-svc-5mg7s [800.228431ms] Jan 13 06:38:12.689: INFO: Created: latency-svc-rfbjx Jan 13 06:38:12.749: INFO: Got endpoints: latency-svc-rfbjx [860.968756ms] Jan 13 06:38:12.777: INFO: Created: latency-svc-44m8s Jan 13 06:38:12.793: INFO: Got endpoints: latency-svc-44m8s [845.572638ms] Jan 13 06:38:12.838: INFO: Created: latency-svc-zhs5x Jan 13 06:38:12.847: INFO: Got endpoints: latency-svc-zhs5x [826.833172ms] Jan 13 06:38:12.898: INFO: Created: latency-svc-f77wd Jan 13 06:38:12.937: INFO: Created: latency-svc-rqg5b Jan 13 06:38:12.937: INFO: Got endpoints: latency-svc-f77wd [842.411218ms] Jan 13 06:38:12.974: INFO: Got endpoints: latency-svc-rqg5b [865.398644ms] Jan 13 06:38:13.048: INFO: Created: latency-svc-7955z Jan 13 06:38:13.071: INFO: Got endpoints: 
latency-svc-7955z [925.236938ms] Jan 13 06:38:13.072: INFO: Created: latency-svc-jvnvl Jan 13 06:38:13.095: INFO: Got endpoints: latency-svc-jvnvl [890.587665ms] Jan 13 06:38:13.115: INFO: Created: latency-svc-4tv54 Jan 13 06:38:13.125: INFO: Got endpoints: latency-svc-4tv54 [893.166642ms] Jan 13 06:38:13.186: INFO: Created: latency-svc-7n2cg Jan 13 06:38:13.194: INFO: Got endpoints: latency-svc-7n2cg [912.692205ms] Jan 13 06:38:13.227: INFO: Created: latency-svc-d464f Jan 13 06:38:13.330: INFO: Got endpoints: latency-svc-d464f [955.485829ms] Jan 13 06:38:13.343: INFO: Created: latency-svc-7gfb9 Jan 13 06:38:13.361: INFO: Got endpoints: latency-svc-7gfb9 [962.609279ms] Jan 13 06:38:13.412: INFO: Created: latency-svc-zphwn Jan 13 06:38:13.421: INFO: Got endpoints: latency-svc-zphwn [923.593489ms] Jan 13 06:38:13.486: INFO: Created: latency-svc-nvdkx Jan 13 06:38:13.509: INFO: Created: latency-svc-69pch Jan 13 06:38:13.509: INFO: Got endpoints: latency-svc-nvdkx [980.360273ms] Jan 13 06:38:13.532: INFO: Got endpoints: latency-svc-69pch [913.310408ms] Jan 13 06:38:13.569: INFO: Created: latency-svc-cnf84 Jan 13 06:38:13.618: INFO: Got endpoints: latency-svc-cnf84 [968.610395ms] Jan 13 06:38:13.649: INFO: Created: latency-svc-wzh64 Jan 13 06:38:13.659: INFO: Got endpoints: latency-svc-wzh64 [909.772764ms] Jan 13 06:38:13.673: INFO: Created: latency-svc-xsdvg Jan 13 06:38:13.685: INFO: Got endpoints: latency-svc-xsdvg [891.943719ms] Jan 13 06:38:13.693: INFO: Created: latency-svc-76b45 Jan 13 06:38:13.712: INFO: Got endpoints: latency-svc-76b45 [864.472939ms] Jan 13 06:38:13.755: INFO: Created: latency-svc-bdc48 Jan 13 06:38:13.785: INFO: Got endpoints: latency-svc-bdc48 [847.427098ms] Jan 13 06:38:13.817: INFO: Created: latency-svc-sdsbd Jan 13 06:38:13.833: INFO: Got endpoints: latency-svc-sdsbd [857.949198ms] Jan 13 06:38:13.881: INFO: Created: latency-svc-6gnr6 Jan 13 06:38:13.938: INFO: Created: latency-svc-nxkbz Jan 13 06:38:13.939: INFO: Got endpoints: latency-svc-6gnr6 [868.491006ms] Jan 13 06:38:13.969: INFO: Got endpoints: latency-svc-nxkbz [873.897249ms] Jan 13 06:38:14.000: INFO: Created: latency-svc-hlrdm Jan 13 06:38:14.019: INFO: Created: latency-svc-jcg2g Jan 13 06:38:14.019: INFO: Got endpoints: latency-svc-hlrdm [893.885175ms] Jan 13 06:38:14.042: INFO: Got endpoints: latency-svc-jcg2g [848.216949ms] Jan 13 06:38:14.070: INFO: Created: latency-svc-mk22q Jan 13 06:38:14.087: INFO: Got endpoints: latency-svc-mk22q [756.024546ms] Jan 13 06:38:14.132: INFO: Created: latency-svc-xdr4t Jan 13 06:38:14.165: INFO: Created: latency-svc-rs7ff Jan 13 06:38:14.165: INFO: Got endpoints: latency-svc-xdr4t [803.15069ms] Jan 13 06:38:14.187: INFO: Got endpoints: latency-svc-rs7ff [765.769566ms] Jan 13 06:38:14.218: INFO: Created: latency-svc-2k7b4 Jan 13 06:38:14.251: INFO: Got endpoints: latency-svc-2k7b4 [741.17457ms] Jan 13 06:38:14.290: INFO: Created: latency-svc-qddj4 Jan 13 06:38:14.332: INFO: Got endpoints: latency-svc-qddj4 [799.889011ms] Jan 13 06:38:14.401: INFO: Created: latency-svc-jmvgd Jan 13 06:38:14.444: INFO: Created: latency-svc-hmbfm Jan 13 06:38:14.445: INFO: Got endpoints: latency-svc-jmvgd [826.782955ms] Jan 13 06:38:14.546: INFO: Got endpoints: latency-svc-hmbfm [886.791966ms] Jan 13 06:38:14.573: INFO: Created: latency-svc-km7bw Jan 13 06:38:14.590: INFO: Got endpoints: latency-svc-km7bw [904.569529ms] Jan 13 06:38:14.627: INFO: Created: latency-svc-cnkgj Jan 13 06:38:14.642: INFO: Got endpoints: latency-svc-cnkgj [930.235095ms] Jan 13 06:38:14.644: INFO: Latencies: 
[59.209376ms 103.906969ms 199.697013ms 256.643457ms 431.15341ms 556.041315ms 616.245547ms 624.700202ms 626.139877ms 632.928203ms 667.212361ms 718.462419ms 719.796098ms 722.542117ms 722.558214ms 723.265382ms 725.459335ms 726.613657ms 733.584232ms 741.17457ms 743.797543ms 748.062429ms 750.989766ms 756.024546ms 761.511821ms 765.769566ms 772.387615ms 775.400918ms 778.339805ms 780.913188ms 784.450191ms 784.951823ms 785.078729ms 785.50274ms 786.370933ms 791.736689ms 792.811749ms 794.581118ms 797.712561ms 799.889011ms 800.228431ms 801.19027ms 802.848742ms 803.15069ms 807.488315ms 808.081292ms 810.256545ms 811.672743ms 812.785651ms 816.232957ms 816.991042ms 817.262049ms 817.432411ms 826.419647ms 826.646504ms 826.782955ms 826.833172ms 828.059656ms 828.127239ms 829.083859ms 829.519858ms 835.042564ms 837.284716ms 837.344314ms 837.354523ms 841.78654ms 842.411218ms 842.57786ms 844.70296ms 845.100608ms 845.572638ms 846.193375ms 846.257522ms 846.531197ms 847.427098ms 848.216949ms 853.496269ms 855.080969ms 856.957228ms 857.551045ms 857.949198ms 860.968756ms 864.472939ms 864.793067ms 865.398644ms 866.271993ms 866.825926ms 867.941931ms 868.301178ms 868.491006ms 869.216504ms 870.426472ms 870.613016ms 871.088594ms 871.734667ms 872.628664ms 873.897249ms 874.512867ms 875.022696ms 875.203128ms 875.589943ms 876.4453ms 877.010778ms 877.441764ms 879.506825ms 881.048488ms 881.082451ms 886.791966ms 887.53048ms 887.730036ms 888.906497ms 890.587665ms 891.878082ms 891.943719ms 892.821132ms 893.166642ms 893.885175ms 894.402718ms 894.942402ms 895.876844ms 896.323932ms 896.769875ms 899.559712ms 904.569529ms 905.832443ms 907.036721ms 908.577765ms 909.772764ms 910.968515ms 911.636431ms 912.692205ms 913.310408ms 918.136163ms 923.593489ms 923.991641ms 925.236938ms 926.971784ms 930.025511ms 930.235095ms 934.440734ms 942.832325ms 949.136847ms 952.754405ms 954.20654ms 954.651105ms 955.485829ms 959.866848ms 962.609279ms 968.610395ms 973.031792ms 977.466275ms 980.360273ms 981.782358ms 986.002245ms 987.130892ms 987.686253ms 991.605584ms 993.912426ms 1.002192233s 1.004139719s 1.019995204s 1.028549236s 1.030152115s 1.032467987s 1.037103923s 1.042898905s 1.044415095s 1.050736929s 1.054755977s 1.058425371s 1.066875805s 1.0765249s 1.077974491s 1.082190664s 1.083333948s 1.096058874s 1.11134504s 1.115971336s 1.124399296s 1.129841826s 1.13554341s 1.137490429s 1.137754499s 1.14232597s 1.143223087s 1.151282104s 1.152683402s 1.159682728s 1.16858279s 1.187209935s 1.189664363s 1.196974154s 1.214430935s 1.228049601s 1.240445327s 1.255511821s 1.290292463s 1.291790978s 1.302958133s 1.323262268s] Jan 13 06:38:14.645: INFO: 50 %ile: 875.589943ms Jan 13 06:38:14.645: INFO: 90 %ile: 1.13554341s Jan 13 06:38:14.646: INFO: 99 %ile: 1.302958133s Jan 13 06:38:14.646: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:38:14.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3458" for this suite. 
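The 50/90/99 %ile figures above are computed from the 200 collected endpoint-propagation latencies. A minimal sketch of one way to compute such a percentile over a slice of time.Duration values follows; the index-rounding convention is an assumption and may not match the e2e framework's exact calculation.

```go
package example

import (
	"sort"
	"time"
)

// percentile returns the q-th percentile (0 < q <= 1) of the collected
// latencies, mirroring the %ile lines above. The nearest-rank index choice
// is one plausible convention, not necessarily the framework's.
func percentile(latencies []time.Duration, q float64) time.Duration {
	if len(latencies) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), latencies...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted))*q+0.5) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}
```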
• [SLOW TEST:15.724 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":309,"completed":49,"skipped":851,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:38:14.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 06:38:18.858: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:38:18.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2328" for this suite. 
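The preceding test creates a container that exits non-zero without writing to /dev/termination-log and relies on TerminationMessagePolicy FallbackToLogsOnError, so the kubelet fills the termination message from the tail of the container log ("DONE" in this run). A hedged sketch of the relevant container fields, with an illustrative image and command:

```go
package example

import corev1 "k8s.io/api/core/v1"

// failingContainer sketches the container shape this test exercises: it logs
// "DONE" and exits non-zero without writing the termination-message file, so
// FallbackToLogsOnError makes the kubelet copy the log tail into the
// termination message. Image and command are illustrative only.
var failingContainer = corev1.Container{
	Name:                     "termination-message-container",
	Image:                    "busybox", // illustrative image
	Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}
```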
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":50,"skipped":862,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:38:19.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 13 06:38:23.675: INFO: Successfully updated pod "annotationupdateee37c267-1836-43d9-b23c-f974c2f26247" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:38:27.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3655" for this suite. • [SLOW TEST:8.832 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":51,"skipped":869,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:38:27.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jan 13 06:38:27.961: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-5506 1817f005-1ce9-4f49-b849-d3e3d66debb9 490639 0 2021-01-13 06:38:27 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-01-13 06:38:27 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5dr8m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5dr8m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5dr8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 06:38:28.003: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:38:30.219: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:38:32.048: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:38:34.016: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jan 13 06:38:34.016: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5506 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:38:34.017: INFO: >>> kubeConfig: /root/.kube/config I0113 06:38:34.107422 10 log.go:181] (0x4000d57550) (0x400104f7c0) Create stream I0113 06:38:34.107545 10 log.go:181] (0x4000d57550) (0x400104f7c0) Stream added, broadcasting: 1 I0113 06:38:34.110335 10 log.go:181] (0x4000d57550) Reply frame received for 1 I0113 06:38:34.110451 10 log.go:181] (0x4000d57550) (0x4002051040) Create stream I0113 06:38:34.110506 10 log.go:181] (0x4000d57550) (0x4002051040) Stream added, broadcasting: 3 I0113 06:38:34.111459 10 log.go:181] (0x4000d57550) Reply frame received for 3 I0113 06:38:34.111558 10 log.go:181] (0x4000d57550) (0x40020510e0) Create stream I0113 06:38:34.111613 10 log.go:181] (0x4000d57550) (0x40020510e0) Stream added, broadcasting: 5 I0113 06:38:34.112721 10 log.go:181] (0x4000d57550) Reply frame received for 5 I0113 06:38:34.177771 10 log.go:181] (0x4000d57550) Data frame received for 3 I0113 06:38:34.177956 10 log.go:181] (0x4002051040) (3) Data frame handling I0113 06:38:34.178114 10 log.go:181] (0x4002051040) (3) Data frame sent I0113 06:38:34.178873 10 log.go:181] (0x4000d57550) Data frame received for 5 I0113 06:38:34.178966 10 log.go:181] (0x40020510e0) (5) Data frame handling I0113 06:38:34.179052 10 log.go:181] (0x4000d57550) Data frame received for 3 I0113 06:38:34.179148 10 log.go:181] (0x4002051040) (3) Data frame handling I0113 06:38:34.180329 10 log.go:181] (0x4000d57550) Data frame received for 1 I0113 06:38:34.180404 10 log.go:181] (0x400104f7c0) (1) Data frame handling I0113 06:38:34.180500 10 log.go:181] (0x400104f7c0) (1) Data frame sent I0113 06:38:34.180595 10 log.go:181] (0x4000d57550) (0x400104f7c0) Stream removed, broadcasting: 1 I0113 06:38:34.180689 10 log.go:181] (0x4000d57550) Go away received I0113 06:38:34.180993 10 log.go:181] (0x4000d57550) (0x400104f7c0) Stream removed, broadcasting: 1 I0113 06:38:34.181073 10 log.go:181] (0x4000d57550) (0x4002051040) Stream removed, broadcasting: 3 I0113 06:38:34.181137 10 log.go:181] (0x4000d57550) (0x40020510e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Jan 13 06:38:34.181: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5506 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:38:34.181: INFO: >>> kubeConfig: /root/.kube/config I0113 06:38:34.355539 10 log.go:181] (0x4000f7b080) (0x40009d21e0) Create stream I0113 06:38:34.355761 10 log.go:181] (0x4000f7b080) (0x40009d21e0) Stream added, broadcasting: 1 I0113 06:38:34.358927 10 log.go:181] (0x4000f7b080) Reply frame received for 1 I0113 06:38:34.359076 10 log.go:181] (0x4000f7b080) (0x4003d50000) Create stream I0113 06:38:34.359155 10 log.go:181] (0x4000f7b080) (0x4003d50000) Stream added, broadcasting: 3 I0113 06:38:34.360378 10 log.go:181] (0x4000f7b080) Reply frame received for 3 I0113 06:38:34.360512 10 log.go:181] (0x4000f7b080) (0x40009d2460) Create stream I0113 06:38:34.360603 10 log.go:181] (0x4000f7b080) (0x40009d2460) Stream added, broadcasting: 5 I0113 06:38:34.362802 10 log.go:181] (0x4000f7b080) Reply frame received for 5 I0113 06:38:34.447729 10 log.go:181] (0x4000f7b080) Data frame received for 3 I0113 06:38:34.447875 10 log.go:181] (0x4003d50000) (3) Data frame handling I0113 06:38:34.447993 10 log.go:181] (0x4003d50000) (3) Data frame sent I0113 06:38:34.449534 10 log.go:181] (0x4000f7b080) Data frame received for 5 I0113 06:38:34.449682 10 log.go:181] (0x40009d2460) (5) Data frame handling I0113 06:38:34.449834 10 log.go:181] (0x4000f7b080) Data frame received for 3 I0113 06:38:34.450008 10 log.go:181] (0x4003d50000) (3) Data frame handling I0113 06:38:34.451413 10 log.go:181] (0x4000f7b080) Data frame received for 1 I0113 06:38:34.451548 10 log.go:181] (0x40009d21e0) (1) Data frame handling I0113 06:38:34.451698 10 log.go:181] (0x40009d21e0) (1) Data frame sent I0113 06:38:34.451834 10 log.go:181] (0x4000f7b080) (0x40009d21e0) Stream removed, broadcasting: 1 I0113 06:38:34.452001 10 log.go:181] (0x4000f7b080) Go away received I0113 06:38:34.452439 10 log.go:181] (0x4000f7b080) (0x40009d21e0) Stream removed, broadcasting: 1 I0113 06:38:34.452610 10 log.go:181] (0x4000f7b080) (0x4003d50000) Stream removed, broadcasting: 3 I0113 06:38:34.452731 10 log.go:181] (0x4000f7b080) (0x40009d2460) Stream removed, broadcasting: 5 Jan 13 06:38:34.453: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:38:34.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5506" for this suite. 
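For reference, a minimal sketch of the Pod spec this DNS test builds, reconstructed from the object dump above (pod name, image, nameserver and search values match the log; the kubectl invocations are illustrative and assume a kubeconfig pointing at the same cluster):

# Pod with a fully custom DNS configuration: dnsPolicy "None" ignores the
# cluster DNS entirely and resolv.conf is generated purely from dnsConfig.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  restartPolicy: Always
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]          # custom resolver from the log above
    searches: ["resolv.conf.local"]   # custom search suffix from the log above
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["pause"]
EOF

# The test verifies the result the same way, by exec'ing agnhost in the pod:
kubectl exec test-dns-nameservers -c agnhost-container -- /agnhost dns-suffix
kubectl exec test-dns-nameservers -c agnhost-container -- /agnhost dns-server-list
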
• [SLOW TEST:6.826 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":309,"completed":52,"skipped":882,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:38:34.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 13 06:38:35.206: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the sample API server. Jan 13 06:38:38.118: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 13 06:38:40.601: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:42.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, 
loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:45.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:47.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:48.606: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:50.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, 
loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746116718, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 06:38:53.793: INFO: Waited 1.132283822s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:38:54.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2302" for this suite. • [SLOW TEST:19.926 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":309,"completed":53,"skipped":886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:38:54.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-91071e94-0a54-4d21-8e66-ec07a0d5c8fe STEP: Creating a pod to test consume configMaps Jan 13 06:38:55.063: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c" in namespace "projected-8164" to be "Succeeded or Failed" Jan 13 06:38:55.078: INFO: Pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.672901ms Jan 13 06:38:57.086: INFO: Pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023242723s Jan 13 06:38:59.095: INFO: Pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c": Phase="Running", Reason="", readiness=true. Elapsed: 4.03171674s Jan 13 06:39:01.104: INFO: Pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.041065423s STEP: Saw pod success Jan 13 06:39:01.104: INFO: Pod "pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c" satisfied condition "Succeeded or Failed" Jan 13 06:39:01.110: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c container agnhost-container: STEP: delete the pod Jan 13 06:39:01.150: INFO: Waiting for pod pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c to disappear Jan 13 06:39:01.157: INFO: Pod pod-projected-configmaps-b00e8282-43b4-4612-a0ed-2cfea81a028c no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:39:01.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8164" for this suite. • [SLOW TEST:6.580 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":54,"skipped":920,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:39:01.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating api versions Jan 13 06:39:01.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7512 api-versions' Jan 13 06:39:02.603: INFO: stderr: "" Jan 13 06:39:02.604: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:39:02.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7512" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":309,"completed":55,"skipped":931,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:39:02.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:39:02.707: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 13 06:39:25.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2590 --namespace=crd-publish-openapi-2590 create -f -' Jan 13 06:39:36.386: INFO: stderr: "" Jan 13 06:39:36.386: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 13 06:39:36.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2590 --namespace=crd-publish-openapi-2590 delete e2e-test-crd-publish-openapi-9732-crds test-cr' Jan 13 06:39:37.704: INFO: stderr: "" Jan 13 06:39:37.704: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 13 06:39:37.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2590 --namespace=crd-publish-openapi-2590 apply -f -' Jan 13 06:39:40.516: INFO: stderr: 
"" Jan 13 06:39:40.516: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 13 06:39:40.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2590 --namespace=crd-publish-openapi-2590 delete e2e-test-crd-publish-openapi-9732-crds test-cr' Jan 13 06:39:41.939: INFO: stderr: "" Jan 13 06:39:41.939: INFO: stdout: "e2e-test-crd-publish-openapi-9732-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 13 06:39:41.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2590 explain e2e-test-crd-publish-openapi-9732-crds' Jan 13 06:39:46.237: INFO: stderr: "" Jan 13 06:39:46.237: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9732-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:09.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2590" for this suite. • [SLOW TEST:66.971 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":309,"completed":56,"skipped":935,"failed":0} S ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:09.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Jan 13 06:40:09.750: INFO: observed Pod pod-test in namespace pods-312 in phase Pending conditions [] Jan 13 06:40:09.760: INFO: observed Pod pod-test in namespace pods-312 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 06:40:09 +0000 UTC }] Jan 13 06:40:09.792: INFO: observed Pod pod-test in namespace pods-312 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 06:40:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2021-01-13 06:40:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 06:40:09 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 06:40:09 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Jan 13 06:40:13.354: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Jan 13 06:40:13.476: INFO: observed event type ADDED Jan 13 06:40:13.476: INFO: observed event type MODIFIED Jan 13 06:40:13.477: INFO: observed event type MODIFIED Jan 13 06:40:13.477: INFO: observed event type MODIFIED Jan 13 06:40:13.477: INFO: observed event type MODIFIED Jan 13 06:40:13.478: INFO: observed event type MODIFIED Jan 13 06:40:13.478: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:13.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-312" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":309,"completed":57,"skipped":936,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:13.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:24.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5358" for this suite. • [SLOW TEST:11.284 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":309,"completed":58,"skipped":967,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:24.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pods Jan 13 06:40:24.878: INFO: created test-pod-1 Jan 13 06:40:24.906: INFO: created test-pod-2 Jan 13 06:40:24.927: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3709" for this suite. •{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":309,"completed":59,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:25.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name cm-test-opt-del-081c27d0-15c9-42bd-9da9-027225f07d8f STEP: Creating configMap with name cm-test-opt-upd-0edae747-9c19-4c79-9eda-3af4a2b28f4d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-081c27d0-15c9-42bd-9da9-027225f07d8f STEP: Updating configmap cm-test-opt-upd-0edae747-9c19-4c79-9eda-3af4a2b28f4d STEP: Creating configMap with name cm-test-opt-create-630028ad-4c6b-4513-86c3-f17433c49b27 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:33.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9670" for this suite. 
• [SLOW TEST:8.265 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":60,"skipped":1034,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:33.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:33.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7545" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":309,"completed":61,"skipped":1054,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:33.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:40:33.757: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:34.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6188" for this suite. 
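A minimal sketch of what "creating/deleting custom resource definition objects" amounts to from the client side (the group and kind below are illustrative; the e2e test generates random names):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com        # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
    listKind: NoxuList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # accept arbitrary fields
EOF

kubectl get crd noxus.mygroup.example.com     # the new API type is registered
kubectl delete crd noxus.mygroup.example.com  # deleting the CRD removes the type
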
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":309,"completed":62,"skipped":1057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:34.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating server pod server in namespace prestop-4497 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4497 STEP: Deleting pre-stop pod Jan 13 06:40:48.151: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:48.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4497" for this suite. 
• [SLOW TEST:13.489 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":309,"completed":63,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:48.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:40:48.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-813" for this suite. 
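The discovery walk in this test can be reproduced by hand; the --raw paths below mirror the STEP lines above (jq is assumed only for readability):

# /apis lists API groups; the test looks for the apiextensions.k8s.io group
kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
# the group document lists its versions, including apiextensions.k8s.io/v1
kubectl get --raw /apis/apiextensions.k8s.io | jq '.versions'
# the version document lists resources; customresourcedefinitions must appear
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'
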
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":309,"completed":64,"skipped":1117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:40:48.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0113 06:40:50.523461 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 06:41:52.856: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:41:52.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-342" for this suite. 
• [SLOW TEST:64.002 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":309,"completed":65,"skipped":1145,"failed":0} SSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:41:52.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:41:53.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6887" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":66,"skipped":1156,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:41:53.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-84878eb6-c6ae-413c-96d4-a7ef99870c04 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-84878eb6-c6ae-413c-96d4-a7ef99870c04 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:41:59.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2433" for this suite. • [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":67,"skipped":1163,"failed":0} SSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:41:59.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:42:21.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-177" for this suite. 
• [SLOW TEST:22.178 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":309,"completed":68,"skipped":1168,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:42:21.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:42:22.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 create -f -' Jan 13 06:42:24.610: INFO: stderr: "" Jan 13 06:42:24.611: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 13 06:42:24.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 create -f -' Jan 13 06:42:29.550: INFO: stderr: "" Jan 13 06:42:29.550: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 13 06:42:30.704: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 06:42:30.704: INFO: Found 1 / 1 Jan 13 06:42:30.704: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 13 06:42:30.745: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 06:42:30.746: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 13 06:42:30.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 describe pod agnhost-primary-qc4wf' Jan 13 06:42:32.211: INFO: stderr: "" Jan 13 06:42:32.211: INFO: stdout: "Name: agnhost-primary-qc4wf\nNamespace: kubectl-2404\nPriority: 0\nNode: leguer-worker/172.18.0.13\nStart Time: Wed, 13 Jan 2021 06:42:24 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.198\nIPs:\n IP: 10.244.2.198\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://684c4be16c7ca011520ed960e6a2c0bc8dd3e1cd0ff3cd670ae4cdcc2e3c267d\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 13 Jan 2021 06:42:29 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-x7dnj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-x7dnj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-x7dnj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-2404/agnhost-primary-qc4wf to leguer-worker\n Normal Pulled 7s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 4s kubelet Created container agnhost-primary\n Normal Started 3s kubelet Started container agnhost-primary\n" Jan 13 06:42:32.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 describe rc agnhost-primary' Jan 13 06:42:33.665: INFO: stderr: "" Jan 13 06:42:33.665: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2404\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: agnhost-primary-qc4wf\n" Jan 13 06:42:33.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 describe service agnhost-primary' Jan 13 06:42:34.979: INFO: stderr: "" Jan 13 06:42:34.979: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2404\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.253.135\nIPs: 10.96.253.135\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.198:6379\nSession Affinity: None\nEvents: \n" Jan 13 06:42:34.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 describe node 
leguer-control-plane' Jan 13 06:42:36.607: INFO: stderr: "" Jan 13 06:42:36.607: INFO: stdout: "Name: leguer-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 10 Jan 2021 17:37:43 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Wed, 13 Jan 2021 06:42:28 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 13 Jan 2021 06:38:36 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 13 Jan 2021 06:38:36 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 13 Jan 2021 06:38:36 +0000 Sun, 10 Jan 2021 17:37:43 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 13 Jan 2021 06:38:36 +0000 Sun, 10 Jan 2021 17:38:11 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.17\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5f1cb3b1931a44e6bb33804f4b6ca7e5\n System UUID: c2287e83-2c9f-458f-8294-12965d8d5e30\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.20.0\n Kube-Proxy Version: v1.20.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/leguer/leguer-control-plane\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-74ff55c5b-flmf7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d13h\n kube-system coredns-74ff55c5b-whxn7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d13h\n kube-system etcd-leguer-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 2d13h\n kube-system kindnet-rjz52 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d13h\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d13h\n kube-system kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d13h\n kube-system kube-proxy-chqjl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d13h\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d13h\n local-path-storage local-path-provisioner-78776bfc44-45fhs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d13h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 950m (5%) 100m (0%)\n memory 290Mi (0%) 390Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 
(0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jan 13 06:42:36.609: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2404 describe namespace kubectl-2404' Jan 13 06:42:38.046: INFO: stderr: "" Jan 13 06:42:38.046: INFO: stdout: "Name: kubectl-2404\nLabels: e2e-framework=kubectl\n e2e-run=9230875c-c25c-4b62-91bc-9048446d6322\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:42:38.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2404" for this suite. • [SLOW TEST:16.543 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":309,"completed":69,"skipped":1183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:42:38.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-9jvgr in namespace proxy-5030 I0113 06:42:38.231840 10 runners.go:190] Created replication controller with name: proxy-service-9jvgr, namespace: proxy-5030, replica count: 1 I0113 06:42:39.282829 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:42:40.283507 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:42:41.283941 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:42:42.284558 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:43.285134 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:44.285787 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 
created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:45.286231 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:46.286747 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:47.287242 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:48.287712 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:49.288336 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:50.289136 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 06:42:51.289692 10 runners.go:190] proxy-service-9jvgr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:42:51.304: INFO: setup took 13.132091625s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 13 06:42:51.315: INFO: (0) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 8.952686ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 15.143341ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 14.92195ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 15.127732ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 15.13199ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 14.900972ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 15.2075ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 15.458485ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 14.761041ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 15.255345ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 15.434013ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 13.251075ms) Jan 13 06:42:51.321: INFO: (0) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 15.576825ms) Jan 13 06:42:51.322: INFO: (0) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... 
(200; 5.003804ms) Jan 13 06:42:51.328: INFO: (1) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 4.75072ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.587468ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.131353ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.132049ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.796155ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 5.179466ms) Jan 13 06:42:51.329: INFO: (1) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: ... (200; 6.23416ms) Jan 13 06:42:51.339: INFO: (2) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 6.671453ms) Jan 13 06:42:51.339: INFO: (2) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 6.585537ms) Jan 13 06:42:51.339: INFO: (2) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 7.161053ms) Jan 13 06:42:51.340: INFO: (2) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 7.406945ms) Jan 13 06:42:51.340: INFO: (2) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 7.592923ms) Jan 13 06:42:51.340: INFO: (2) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 7.389208ms) Jan 13 06:42:51.340: INFO: (2) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 7.380261ms) Jan 13 06:42:51.343: INFO: (3) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 3.423376ms) Jan 13 06:42:51.344: INFO: (3) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 3.439776ms) Jan 13 06:42:51.344: INFO: (3) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... 
(200; 3.542368ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.422655ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.625128ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 4.176744ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.788643ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 5.014913ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 4.76066ms) Jan 13 06:42:51.345: INFO: (3) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 4.561144ms) Jan 13 06:42:51.346: INFO: (3) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 6.157935ms) Jan 13 06:42:51.346: INFO: (3) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.966449ms) Jan 13 06:42:51.347: INFO: (3) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 6.16343ms) Jan 13 06:42:51.347: INFO: (3) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... (200; 6.022351ms) Jan 13 06:42:51.353: INFO: (4) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.686703ms) Jan 13 06:42:51.355: INFO: (4) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 7.10878ms) Jan 13 06:42:51.355: INFO: (4) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 7.870935ms) Jan 13 06:42:51.355: INFO: (4) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 7.815275ms) Jan 13 06:42:51.355: INFO: (4) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 7.843345ms) Jan 13 06:42:51.355: INFO: (4) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 8.021062ms) Jan 13 06:42:51.356: INFO: (4) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 8.387945ms) Jan 13 06:42:51.356: INFO: (4) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 8.749218ms) Jan 13 06:42:51.356: INFO: (4) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 8.749036ms) Jan 13 06:42:51.356: INFO: (4) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 8.632152ms) Jan 13 06:42:51.356: INFO: (4) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 9.109972ms) Jan 13 06:42:51.365: INFO: (5) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 7.939305ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... 
(200; 9.903972ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 10.013463ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 10.09933ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 10.555395ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 10.6562ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 10.498596ms) Jan 13 06:42:51.367: INFO: (5) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 10.885619ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 11.167694ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 10.857504ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 10.68395ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 11.222757ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 11.584295ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 11.486201ms) Jan 13 06:42:51.368: INFO: (5) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... (200; 4.240807ms) Jan 13 06:42:51.375: INFO: (6) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 4.802108ms) Jan 13 06:42:51.375: INFO: (6) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 6.364673ms) Jan 13 06:42:51.376: INFO: (6) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.689638ms) Jan 13 06:42:51.376: INFO: (6) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 6.301901ms) Jan 13 06:42:51.376: INFO: (6) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 3.527625ms) Jan 13 06:42:51.376: INFO: (6) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 3.318078ms) Jan 13 06:42:51.377: INFO: (6) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.878683ms) Jan 13 06:42:51.377: INFO: (6) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... 
(200; 4.053644ms) Jan 13 06:42:51.381: INFO: (7) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.352371ms) Jan 13 06:42:51.381: INFO: (7) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 4.39834ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 4.532503ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 4.628738ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 4.622798ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 4.748735ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.01597ms) Jan 13 06:42:51.382: INFO: (7) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.006515ms) Jan 13 06:42:51.383: INFO: (7) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.331765ms) Jan 13 06:42:51.383: INFO: (7) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 5.648659ms) Jan 13 06:42:51.383: INFO: (7) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... (200; 3.779517ms) Jan 13 06:42:51.388: INFO: (8) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 3.889969ms) Jan 13 06:42:51.388: INFO: (8) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 4.336925ms) Jan 13 06:42:51.388: INFO: (8) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.354712ms) Jan 13 06:42:51.388: INFO: (8) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.633421ms) Jan 13 06:42:51.388: INFO: (8) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 4.663172ms) Jan 13 06:42:51.389: INFO: (8) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 4.630409ms) Jan 13 06:42:51.389: INFO: (8) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 4.876505ms) Jan 13 06:42:51.389: INFO: (8) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: ... 
(200; 5.671077ms) Jan 13 06:42:51.390: INFO: (8) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 5.427219ms) Jan 13 06:42:51.390: INFO: (8) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.818388ms) Jan 13 06:42:51.390: INFO: (8) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.944233ms) Jan 13 06:42:51.393: INFO: (9) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.435182ms) Jan 13 06:42:51.394: INFO: (9) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 4.078596ms) Jan 13 06:42:51.395: INFO: (9) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.880303ms) Jan 13 06:42:51.395: INFO: (9) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.896928ms) Jan 13 06:42:51.395: INFO: (9) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 4.994068ms) Jan 13 06:42:51.395: INFO: (9) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.29337ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 5.490038ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.511921ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 5.686827ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 5.820926ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.971277ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.97568ms) Jan 13 06:42:51.396: INFO: (9) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 6.009927ms) Jan 13 06:42:51.397: INFO: (9) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 6.27712ms) Jan 13 06:42:51.397: INFO: (9) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... 
(200; 3.708142ms) Jan 13 06:42:51.401: INFO: (10) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.848115ms) Jan 13 06:42:51.401: INFO: (10) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 4.014192ms) Jan 13 06:42:51.401: INFO: (10) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 4.607146ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 5.039343ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.85531ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 4.956456ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 5.465231ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.496765ms) Jan 13 06:42:51.402: INFO: (10) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.450999ms) Jan 13 06:42:51.403: INFO: (10) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.443759ms) Jan 13 06:42:51.403: INFO: (10) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test (200; 5.747833ms) Jan 13 06:42:51.403: INFO: (10) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.993529ms) Jan 13 06:42:51.403: INFO: (10) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.995672ms) Jan 13 06:42:51.407: INFO: (11) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.137133ms) Jan 13 06:42:51.407: INFO: (11) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 3.367618ms) Jan 13 06:42:51.408: INFO: (11) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.301118ms) Jan 13 06:42:51.408: INFO: (11) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.215095ms) Jan 13 06:42:51.408: INFO: (11) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... 
(200; 4.833425ms) Jan 13 06:42:51.408: INFO: (11) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 5.052663ms) Jan 13 06:42:51.408: INFO: (11) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.429706ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 4.958624ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 4.984453ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 5.301355ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.68608ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.687187ms) Jan 13 06:42:51.409: INFO: (11) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 5.737414ms) Jan 13 06:42:51.410: INFO: (11) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.811428ms) Jan 13 06:42:51.413: INFO: (12) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 3.243033ms) Jan 13 06:42:51.413: INFO: (12) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.411514ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 3.878871ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.956857ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 4.13311ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 4.271318ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 4.409955ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.341986ms) Jan 13 06:42:51.414: INFO: (12) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 4.423716ms) Jan 13 06:42:51.415: INFO: (12) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test (200; 3.766204ms) Jan 13 06:42:51.420: INFO: (13) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 3.469255ms) Jan 13 06:42:51.420: INFO: (13) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... 
(200; 3.511171ms) Jan 13 06:42:51.420: INFO: (13) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 4.056753ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 4.195652ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 4.165145ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.132392ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 4.701268ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.007927ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 4.798427ms) Jan 13 06:42:51.421: INFO: (13) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.159295ms) Jan 13 06:42:51.422: INFO: (13) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 4.996166ms) Jan 13 06:42:51.422: INFO: (13) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.037159ms) Jan 13 06:42:51.422: INFO: (13) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.766983ms) Jan 13 06:42:51.422: INFO: (13) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.669608ms) Jan 13 06:42:51.426: INFO: (14) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.320502ms) Jan 13 06:42:51.426: INFO: (14) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.138672ms) Jan 13 06:42:51.429: INFO: (14) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 6.116387ms) Jan 13 06:42:51.430: INFO: (14) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 7.832401ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 8.193142ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 8.469226ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 8.073327ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 8.397755ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 8.719085ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 8.822136ms) Jan 13 06:42:51.431: INFO: (14) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 8.723008ms) Jan 13 06:42:51.432: INFO: (14) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 8.613172ms) Jan 13 06:42:51.432: INFO: (14) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 9.16747ms) Jan 13 06:42:51.432: INFO: (14) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... 
(200; 9.17433ms) Jan 13 06:42:51.432: INFO: (14) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 9.169378ms) Jan 13 06:42:51.432: INFO: (14) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test (200; 4.949733ms) Jan 13 06:42:51.437: INFO: (15) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.007462ms) Jan 13 06:42:51.437: INFO: (15) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 5.191768ms) Jan 13 06:42:51.437: INFO: (15) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 5.256721ms) Jan 13 06:42:51.438: INFO: (15) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 5.265945ms) Jan 13 06:42:51.438: INFO: (15) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 5.597674ms) Jan 13 06:42:51.438: INFO: (15) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.512352ms) Jan 13 06:42:51.438: INFO: (15) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.867822ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 6.452831ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 6.329663ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 6.480946ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 6.611488ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 6.899908ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 6.793983ms) Jan 13 06:42:51.439: INFO: (15) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 6.93329ms) Jan 13 06:42:51.444: INFO: (16) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 4.367341ms) Jan 13 06:42:51.444: INFO: (16) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.364054ms) Jan 13 06:42:51.444: INFO: (16) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 4.616859ms) Jan 13 06:42:51.444: INFO: (16) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 4.830354ms) Jan 13 06:42:51.444: INFO: (16) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... 
(200; 4.343086ms) Jan 13 06:42:51.445: INFO: (16) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.287435ms) Jan 13 06:42:51.446: INFO: (16) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 6.069177ms) Jan 13 06:42:51.446: INFO: (16) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 6.054484ms) Jan 13 06:42:51.446: INFO: (16) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 6.130558ms) Jan 13 06:42:51.447: INFO: (16) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 6.487136ms) Jan 13 06:42:51.447: INFO: (16) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 6.59396ms) Jan 13 06:42:51.447: INFO: (16) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 6.980904ms) Jan 13 06:42:51.447: INFO: (16) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test<... (200; 3.007187ms) Jan 13 06:42:51.452: INFO: (17) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 4.381525ms) Jan 13 06:42:51.452: INFO: (17) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.763668ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.093245ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.33842ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 4.053663ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 4.624625ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.656474ms) Jan 13 06:42:51.453: INFO: (17) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... 
(200; 4.654078ms) Jan 13 06:42:51.454: INFO: (17) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: test (200; 4.759743ms) Jan 13 06:42:51.454: INFO: (17) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 5.157603ms) Jan 13 06:42:51.454: INFO: (17) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 5.082617ms) Jan 13 06:42:51.455: INFO: (17) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 6.198188ms) Jan 13 06:42:51.455: INFO: (17) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 6.257668ms) Jan 13 06:42:51.455: INFO: (17) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 6.389361ms) Jan 13 06:42:51.459: INFO: (18) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 3.808177ms) Jan 13 06:42:51.459: INFO: (18) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:160/proxy/: foo (200; 3.892415ms) Jan 13 06:42:51.460: INFO: (18) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 4.712747ms) Jan 13 06:42:51.460: INFO: (18) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 4.712865ms) Jan 13 06:42:51.461: INFO: (18) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:462/proxy/: tls qux (200; 5.176712ms) Jan 13 06:42:51.461: INFO: (18) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:1080/proxy/: ... (200; 5.306224ms) Jan 13 06:42:51.461: INFO: (18) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 5.595805ms) Jan 13 06:42:51.461: INFO: (18) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/: foo (200; 5.723491ms) Jan 13 06:42:51.461: INFO: (18) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 5.664053ms) Jan 13 06:42:51.462: INFO: (18) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 6.025151ms) Jan 13 06:42:51.462: INFO: (18) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 5.995523ms) Jan 13 06:42:51.462: INFO: (18) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 6.138283ms) Jan 13 06:42:51.462: INFO: (18) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 6.210751ms) Jan 13 06:42:51.462: INFO: (18) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:443/proxy/: ... 
(200; 5.120249ms) Jan 13 06:42:51.470: INFO: (19) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname2/proxy/: bar (200; 5.258828ms) Jan 13 06:42:51.470: INFO: (19) /api/v1/namespaces/proxy-5030/pods/https:proxy-service-9jvgr-q2p7f:460/proxy/: tls baz (200; 5.209665ms) Jan 13 06:42:51.470: INFO: (19) /api/v1/namespaces/proxy-5030/services/http:proxy-service-9jvgr:portname1/proxy/: foo (200; 5.724555ms) Jan 13 06:42:51.471: INFO: (19) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname1/proxy/: tls baz (200; 6.227625ms) Jan 13 06:42:51.471: INFO: (19) /api/v1/namespaces/proxy-5030/services/https:proxy-service-9jvgr:tlsportname2/proxy/: tls qux (200; 6.729383ms) Jan 13 06:42:51.471: INFO: (19) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f/proxy/: test (200; 6.585021ms) Jan 13 06:42:51.471: INFO: (19) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 6.59718ms) Jan 13 06:42:51.472: INFO: (19) /api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:162/proxy/: bar (200; 6.982584ms) Jan 13 06:42:51.472: INFO: (19) /api/v1/namespaces/proxy-5030/pods/proxy-service-9jvgr-q2p7f:1080/proxy/: test<... (200; 7.626795ms) Jan 13 06:42:51.472: INFO: (19) /api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname2/proxy/: bar (200; 7.294599ms) STEP: deleting ReplicationController proxy-service-9jvgr in namespace proxy-5030, will wait for the garbage collector to delete the pods Jan 13 06:42:51.535: INFO: Deleting ReplicationController proxy-service-9jvgr took: 8.549473ms Jan 13 06:42:52.136: INFO: Terminating ReplicationController proxy-service-9jvgr pods took: 600.687291ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:43:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5030" for this suite. 
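For context on the proxy attempts above: every request goes through the apiserver's proxy subresource, which forwards either to the backing pod or to the service's endpoints. A minimal sketch of how the same URLs could be queried by hand, reusing the namespace, pod, and service names taken from this log; the kubectl --raw form is just a generic way to issue a GET against an apiserver path and is not what the test itself runs:

  # GET a pod port through the apiserver proxy; an http:/https: prefix on the pod name selects the backend scheme
  kubectl get --raw "/api/v1/namespaces/proxy-5030/pods/http:proxy-service-9jvgr-q2p7f:160/proxy/"

  # GET a named service port through the apiserver proxy
  kubectl get --raw "/api/v1/namespaces/proxy-5030/services/proxy-service-9jvgr:portname1/proxy/"

Each "(200; ...)" entry above is one such round trip, logged with a fragment of the response body and the per-attempt latency.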
• [SLOW TEST:31.794 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":309,"completed":70,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:43:09.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 13 06:43:18.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:18.082: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:20.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:20.368: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:22.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:22.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:24.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:24.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:26.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:26.089: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:28.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:28.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:30.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:30.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:32.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:32.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:34.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:34.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:36.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:36.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:38.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:38.090: INFO: Pod pod-with-prestop-exec-hook still 
exists Jan 13 06:43:40.082: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:40.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:42.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:42.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:44.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:44.092: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:46.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:46.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:48.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:48.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:50.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:50.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:52.082: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:52.088: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:54.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:54.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:56.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:56.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:43:58.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:43:58.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:00.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:00.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:02.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:02.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:04.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:04.091: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:06.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:06.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:08.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:08.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:10.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:10.104: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:12.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:12.092: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:14.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:14.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:16.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:16.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:18.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:18.090: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:20.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:20.112: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 06:44:22.082: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 06:44:22.089: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:44:22.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4343" for this suite. • [SLOW TEST:72.270 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":309,"completed":71,"skipped":1257,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:44:22.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-downwardapi-v5kl STEP: Creating a pod to test atomic-volume-subpath Jan 13 06:44:23.363: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v5kl" in namespace "subpath-7059" to be "Succeeded or Failed" Jan 13 06:44:23.384: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Pending", Reason="", readiness=false. Elapsed: 20.604697ms Jan 13 06:44:25.447: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083775588s Jan 13 06:44:27.620: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 4.256315966s Jan 13 06:44:29.628: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 6.264015746s Jan 13 06:44:31.635: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 8.271973911s Jan 13 06:44:33.644: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 10.280128407s Jan 13 06:44:35.651: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 12.287888468s Jan 13 06:44:37.658: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 14.294654793s Jan 13 06:44:39.666: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.302881722s Jan 13 06:44:41.674: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 18.310927002s Jan 13 06:44:43.681: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 20.31773674s Jan 13 06:44:45.689: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Running", Reason="", readiness=true. Elapsed: 22.325498067s Jan 13 06:44:47.696: INFO: Pod "pod-subpath-test-downwardapi-v5kl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.332336847s STEP: Saw pod success Jan 13 06:44:47.696: INFO: Pod "pod-subpath-test-downwardapi-v5kl" satisfied condition "Succeeded or Failed" Jan 13 06:44:47.702: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-downwardapi-v5kl container test-container-subpath-downwardapi-v5kl: STEP: delete the pod Jan 13 06:44:47.825: INFO: Waiting for pod pod-subpath-test-downwardapi-v5kl to disappear Jan 13 06:44:47.844: INFO: Pod pod-subpath-test-downwardapi-v5kl no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-v5kl Jan 13 06:44:47.844: INFO: Deleting pod "pod-subpath-test-downwardapi-v5kl" in namespace "subpath-7059" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:44:47.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7059" for this suite. • [SLOW TEST:25.731 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":309,"completed":72,"skipped":1268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:44:47.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:44:48.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9258" for this suite. 
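The "should provide secure master service" spec above creates no pods and only inspects existing cluster state, which is why its portion of the log is so short. Roughly, it asserts that the built-in kubernetes Service in the default namespace exposes the apiserver on an https/443 port. A hedged manual equivalent (the test reads the Service through the API rather than shelling out to kubectl):

  # Inspect the built-in apiserver Service; the conformance check expects an https port (443) on it
  kubectl --kubeconfig=/root/.kube/config get service kubernetes -n default -o yaml

This is only an illustration of the assertion, not the test's actual code path.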
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":309,"completed":73,"skipped":1303,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:44:48.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 06:44:48.223: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 06:45:48.317: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:45:48.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:45:48.535: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Jan 13 06:45:48.542: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:45:48.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-8654" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:45:48.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8228" for this suite. 
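The two "Forbidden: may not be changed in an update" messages above are expected: the value field of a PriorityClass is immutable, and the spec deliberately tries to change it on p1 and p2 to confirm the apiserver rejects the update. A small sketch of that behaviour with made-up numbers (the names p1/p2 come from the log, but the test's actual values are not printed in it):

  # Create a PriorityClass, then try to change its value; the second apply fails with the same error as above
  kubectl apply -f - <<'EOF'
  apiVersion: scheduling.k8s.io/v1
  kind: PriorityClass
  metadata:
    name: p1
  value: 1000          # illustrative value; immutable once the object exists
  description: example priority class for this sketch
  EOF

Re-applying the same manifest with value: 2000 is rejected with "Value: Forbidden: may not be changed in an update", while mutable fields such as metadata labels can still be patched, which is what the remaining HTTP-method checks in this spec exercise.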
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.752 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":309,"completed":74,"skipped":1316,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:45:48.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-0345d403-a52a-489d-80c1-aadd13f63127 STEP: Creating secret with name s-test-opt-upd-75b5ad6b-e857-4dcc-8726-a4fce6ea2339 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0345d403-a52a-489d-80c1-aadd13f63127 STEP: Updating secret s-test-opt-upd-75b5ad6b-e857-4dcc-8726-a4fce6ea2339 STEP: Creating secret with name s-test-opt-create-d381f16e-d8e9-4e68-91c9-34cafd04493e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:45:57.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6113" for this suite. • [SLOW TEST:8.295 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":75,"skipped":1329,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:45:57.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:46:10.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9110" for this suite. • [SLOW TEST:13.332 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":309,"completed":76,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:46:10.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 06:46:10.519: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 06:46:10.540: INFO: Waiting for terminating namespaces to be deleted... 
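The resourcequota-9110 spec recorded above walks the whole quota lifecycle: create a ResourceQuota, admit a pod that fits, reject pods that would exceed the remaining quota, and watch the usage drop back once the pod is deleted. A rough kubectl equivalent, with illustrative names and limits (quota-demo, pod-quota, the pause image and the pod sizes are not what the suite used):

kubectl create namespace quota-demo

cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
spec:
  hard:
    pods: "1"
    requests.cpu: 500m
    requests.memory: 512Mi
EOF

# a pod that fits is admitted and shows up under "Used" in the quota status
cat <<'EOF' | kubectl apply -n quota-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: fits
spec:
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
EOF
kubectl describe resourcequota pod-quota -n quota-demo

# any further pod is rejected at admission because it would exceed pods: "1";
# deleting the first pod releases the usage again
kubectl delete pod fits -n quota-demo
kubectl describe resourcequota pod-quota -n quota-demo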
Jan 13 06:46:10.548: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 06:46:10.561: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.561: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 06:46:10.561: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.561: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 06:46:10.561: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.561: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 06:46:10.561: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.561: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 06:46:10.561: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 06:46:10.562: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 06:46:10.562: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 06:46:10.562: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 06:46:10.562: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 06:46:10.562: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.562: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 06:46:10.562: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 06:46:10.574: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.574: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 06:46:10.574: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.574: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 06:46:10.574: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.574: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 06:46:10.574: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.574: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, 
restart count 0 Jan 13 06:46:10.574: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.574: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 06:46:10.574: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.575: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 06:46:10.575: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.575: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 06:46:10.575: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.575: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 06:46:10.575: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 06:46:10.575: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-cde1166e-dc37-4390-a5f5-3f9509a8e7b6 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.12 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-cde1166e-dc37-4390-a5f5-3f9509a8e7b6 off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-cde1166e-dc37-4390-a5f5-3f9509a8e7b6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:51:18.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9245" for this suite. 
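The long wait recorded above is the scheduler refusing to place the second pod: two pods that request the same hostPort/protocol conflict on a node even when one binds the wildcard 0.0.0.0 and the other a specific node IP, so the later pod stays Pending. A minimal reproduction, assuming a node whose kubernetes.io/hostname label is example-node, an illustrative node IP (192.0.2.10) and the pause image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostport-wildcard
spec:
  nodeSelector:
    kubernetes.io/hostname: example-node   # force both pods onto the same node
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
      hostIP: "0.0.0.0"                    # wildcard host IP
---
apiVersion: v1
kind: Pod
metadata:
  name: hostport-specific
spec:
  nodeSelector:
    kubernetes.io/hostname: example-node
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
      hostIP: "192.0.2.10"                 # the node's own IP (illustrative)
EOF

# the wildcard pod schedules; the second conflicts on 54322/TCP and stays Pending
kubectl get pods hostport-wildcard hostport-specific -o wide
kubectl describe pod hostport-specific    # FailedScheduling: no free ports on the node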
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:308.599 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":309,"completed":77,"skipped":1391,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:51:19.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-5012 STEP: creating service affinity-nodeport-transition in namespace services-5012 STEP: creating replication controller affinity-nodeport-transition in namespace services-5012 I0113 06:51:19.395590 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5012, replica count: 3 I0113 06:51:22.447010 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:51:25.447769 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:51:25.470: INFO: Creating new exec pod Jan 13 06:51:30.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 13 06:51:35.148: INFO: stderr: "I0113 06:51:35.021693 762 log.go:181] (0x400003ac60) (0x4000ba8140) Create stream\nI0113 06:51:35.025969 762 log.go:181] (0x400003ac60) (0x4000ba8140) Stream added, broadcasting: 1\nI0113 06:51:35.041849 762 log.go:181] (0x400003ac60) Reply frame received for 1\nI0113 06:51:35.042463 762 log.go:181] (0x400003ac60) (0x4000c08000) Create stream\nI0113 06:51:35.042560 762 log.go:181] (0x400003ac60) (0x4000c08000) Stream added, broadcasting: 3\nI0113 06:51:35.044226 762 log.go:181] (0x400003ac60) Reply frame received for 3\nI0113 06:51:35.044481 762 log.go:181] (0x400003ac60) (0x4000ba81e0) 
Create stream\nI0113 06:51:35.044540 762 log.go:181] (0x400003ac60) (0x4000ba81e0) Stream added, broadcasting: 5\nI0113 06:51:35.045567 762 log.go:181] (0x400003ac60) Reply frame received for 5\nI0113 06:51:35.126727 762 log.go:181] (0x400003ac60) Data frame received for 3\nI0113 06:51:35.127008 762 log.go:181] (0x4000c08000) (3) Data frame handling\nI0113 06:51:35.127214 762 log.go:181] (0x400003ac60) Data frame received for 5\nI0113 06:51:35.127329 762 log.go:181] (0x4000ba81e0) (5) Data frame handling\nI0113 06:51:35.128140 762 log.go:181] (0x400003ac60) Data frame received for 1\nI0113 06:51:35.128256 762 log.go:181] (0x4000ba8140) (1) Data frame handling\nI0113 06:51:35.129560 762 log.go:181] (0x4000ba8140) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0113 06:51:35.130573 762 log.go:181] (0x4000ba81e0) (5) Data frame sent\nI0113 06:51:35.130713 762 log.go:181] (0x400003ac60) Data frame received for 5\nI0113 06:51:35.130829 762 log.go:181] (0x4000ba81e0) (5) Data frame handling\nI0113 06:51:35.130976 762 log.go:181] (0x4000ba81e0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0113 06:51:35.131105 762 log.go:181] (0x400003ac60) Data frame received for 5\nI0113 06:51:35.131217 762 log.go:181] (0x4000ba81e0) (5) Data frame handling\nI0113 06:51:35.131618 762 log.go:181] (0x400003ac60) (0x4000ba8140) Stream removed, broadcasting: 1\nI0113 06:51:35.135177 762 log.go:181] (0x400003ac60) Go away received\nI0113 06:51:35.138007 762 log.go:181] (0x400003ac60) (0x4000ba8140) Stream removed, broadcasting: 1\nI0113 06:51:35.138362 762 log.go:181] (0x400003ac60) (0x4000c08000) Stream removed, broadcasting: 3\nI0113 06:51:35.138612 762 log.go:181] (0x400003ac60) (0x4000ba81e0) Stream removed, broadcasting: 5\n" Jan 13 06:51:35.149: INFO: stdout: "" Jan 13 06:51:35.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c nc -zv -t -w 2 10.96.238.91 80' Jan 13 06:51:36.900: INFO: stderr: "I0113 06:51:36.770790 783 log.go:181] (0x4000da8c60) (0x40009fa1e0) Create stream\nI0113 06:51:36.773253 783 log.go:181] (0x4000da8c60) (0x40009fa1e0) Stream added, broadcasting: 1\nI0113 06:51:36.784260 783 log.go:181] (0x4000da8c60) Reply frame received for 1\nI0113 06:51:36.785313 783 log.go:181] (0x4000da8c60) (0x4000aff7c0) Create stream\nI0113 06:51:36.785419 783 log.go:181] (0x4000da8c60) (0x4000aff7c0) Stream added, broadcasting: 3\nI0113 06:51:36.786846 783 log.go:181] (0x4000da8c60) Reply frame received for 3\nI0113 06:51:36.787081 783 log.go:181] (0x4000da8c60) (0x4000affa40) Create stream\nI0113 06:51:36.787135 783 log.go:181] (0x4000da8c60) (0x4000affa40) Stream added, broadcasting: 5\nI0113 06:51:36.788295 783 log.go:181] (0x4000da8c60) Reply frame received for 5\nI0113 06:51:36.878777 783 log.go:181] (0x4000da8c60) Data frame received for 3\nI0113 06:51:36.879206 783 log.go:181] (0x4000da8c60) Data frame received for 5\nI0113 06:51:36.879393 783 log.go:181] (0x4000aff7c0) (3) Data frame handling\nI0113 06:51:36.879597 783 log.go:181] (0x4000affa40) (5) Data frame handling\nI0113 06:51:36.880503 783 log.go:181] (0x4000da8c60) Data frame received for 1\nI0113 06:51:36.880609 783 log.go:181] (0x40009fa1e0) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.238.91 80\nConnection to 10.96.238.91 80 port [tcp/http] succeeded!\nI0113 06:51:36.883087 783 log.go:181] (0x40009fa1e0) (1) Data frame sent\nI0113 
06:51:36.883772 783 log.go:181] (0x4000affa40) (5) Data frame sent\nI0113 06:51:36.883968 783 log.go:181] (0x4000da8c60) Data frame received for 5\nI0113 06:51:36.884148 783 log.go:181] (0x4000affa40) (5) Data frame handling\nI0113 06:51:36.885250 783 log.go:181] (0x4000da8c60) (0x40009fa1e0) Stream removed, broadcasting: 1\nI0113 06:51:36.888490 783 log.go:181] (0x4000da8c60) Go away received\nI0113 06:51:36.891652 783 log.go:181] (0x4000da8c60) (0x40009fa1e0) Stream removed, broadcasting: 1\nI0113 06:51:36.891973 783 log.go:181] (0x4000da8c60) (0x4000aff7c0) Stream removed, broadcasting: 3\nI0113 06:51:36.892197 783 log.go:181] (0x4000da8c60) (0x4000affa40) Stream removed, broadcasting: 5\n" Jan 13 06:51:36.901: INFO: stdout: "" Jan 13 06:51:36.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30424' Jan 13 06:51:38.430: INFO: stderr: "I0113 06:51:38.322698 803 log.go:181] (0x40000f4370) (0x400028d7c0) Create stream\nI0113 06:51:38.326328 803 log.go:181] (0x40000f4370) (0x400028d7c0) Stream added, broadcasting: 1\nI0113 06:51:38.338652 803 log.go:181] (0x40000f4370) Reply frame received for 1\nI0113 06:51:38.339526 803 log.go:181] (0x40000f4370) (0x4000c8a1e0) Create stream\nI0113 06:51:38.339618 803 log.go:181] (0x40000f4370) (0x4000c8a1e0) Stream added, broadcasting: 3\nI0113 06:51:38.341085 803 log.go:181] (0x40000f4370) Reply frame received for 3\nI0113 06:51:38.341319 803 log.go:181] (0x40000f4370) (0x4000a2e280) Create stream\nI0113 06:51:38.341381 803 log.go:181] (0x40000f4370) (0x4000a2e280) Stream added, broadcasting: 5\nI0113 06:51:38.342718 803 log.go:181] (0x40000f4370) Reply frame received for 5\nI0113 06:51:38.406883 803 log.go:181] (0x40000f4370) Data frame received for 5\nI0113 06:51:38.407355 803 log.go:181] (0x40000f4370) Data frame received for 3\nI0113 06:51:38.407517 803 log.go:181] (0x4000a2e280) (5) Data frame handling\nI0113 06:51:38.407768 803 log.go:181] (0x4000c8a1e0) (3) Data frame handling\nI0113 06:51:38.408023 803 log.go:181] (0x40000f4370) Data frame received for 1\nI0113 06:51:38.408125 803 log.go:181] (0x400028d7c0) (1) Data frame handling\nI0113 06:51:38.409819 803 log.go:181] (0x400028d7c0) (1) Data frame sent\nI0113 06:51:38.410734 803 log.go:181] (0x4000a2e280) (5) Data frame sent\nI0113 06:51:38.410884 803 log.go:181] (0x40000f4370) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.13 30424\nConnection to 172.18.0.13 30424 port [tcp/30424] succeeded!\nI0113 06:51:38.411001 803 log.go:181] (0x4000a2e280) (5) Data frame handling\nI0113 06:51:38.412395 803 log.go:181] (0x40000f4370) (0x400028d7c0) Stream removed, broadcasting: 1\nI0113 06:51:38.415743 803 log.go:181] (0x40000f4370) Go away received\nI0113 06:51:38.420043 803 log.go:181] (0x40000f4370) (0x400028d7c0) Stream removed, broadcasting: 1\nI0113 06:51:38.420531 803 log.go:181] (0x40000f4370) (0x4000c8a1e0) Stream removed, broadcasting: 3\nI0113 06:51:38.420808 803 log.go:181] (0x40000f4370) (0x4000a2e280) Stream removed, broadcasting: 5\n" Jan 13 06:51:38.431: INFO: stdout: "" Jan 13 06:51:38.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30424' Jan 13 06:51:39.946: INFO: stderr: "I0113 06:51:39.815424 823 log.go:181] (0x400003a420) (0x4000626000) Create stream\nI0113 
06:51:39.820263 823 log.go:181] (0x400003a420) (0x4000626000) Stream added, broadcasting: 1\nI0113 06:51:39.834057 823 log.go:181] (0x400003a420) Reply frame received for 1\nI0113 06:51:39.834886 823 log.go:181] (0x400003a420) (0x4000225cc0) Create stream\nI0113 06:51:39.834994 823 log.go:181] (0x400003a420) (0x4000225cc0) Stream added, broadcasting: 3\nI0113 06:51:39.836600 823 log.go:181] (0x400003a420) Reply frame received for 3\nI0113 06:51:39.836949 823 log.go:181] (0x400003a420) (0x40003a0640) Create stream\nI0113 06:51:39.837031 823 log.go:181] (0x400003a420) (0x40003a0640) Stream added, broadcasting: 5\nI0113 06:51:39.838265 823 log.go:181] (0x400003a420) Reply frame received for 5\nI0113 06:51:39.929201 823 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:51:39.929451 823 log.go:181] (0x400003a420) Data frame received for 1\nI0113 06:51:39.929609 823 log.go:181] (0x40003a0640) (5) Data frame handling\nI0113 06:51:39.929726 823 log.go:181] (0x4000626000) (1) Data frame handling\nI0113 06:51:39.929913 823 log.go:181] (0x400003a420) Data frame received for 3\nI0113 06:51:39.930016 823 log.go:181] (0x4000225cc0) (3) Data frame handling\nI0113 06:51:39.931837 823 log.go:181] (0x4000626000) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 30424\nConnection to 172.18.0.12 30424 port [tcp/30424] succeeded!\nI0113 06:51:39.932385 823 log.go:181] (0x40003a0640) (5) Data frame sent\nI0113 06:51:39.932580 823 log.go:181] (0x400003a420) Data frame received for 5\nI0113 06:51:39.932687 823 log.go:181] (0x40003a0640) (5) Data frame handling\nI0113 06:51:39.933804 823 log.go:181] (0x400003a420) (0x4000626000) Stream removed, broadcasting: 1\nI0113 06:51:39.936110 823 log.go:181] (0x400003a420) Go away received\nI0113 06:51:39.939229 823 log.go:181] (0x400003a420) (0x4000626000) Stream removed, broadcasting: 1\nI0113 06:51:39.939538 823 log.go:181] (0x400003a420) (0x4000225cc0) Stream removed, broadcasting: 3\nI0113 06:51:39.939776 823 log.go:181] (0x400003a420) (0x40003a0640) Stream removed, broadcasting: 5\n" Jan 13 06:51:39.947: INFO: stdout: "" Jan 13 06:51:39.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:30424/ ; done' Jan 13 06:51:41.620: INFO: stderr: "I0113 06:51:41.395776 844 log.go:181] (0x4000c74000) (0x4000a5a1e0) Create stream\nI0113 06:51:41.401659 844 log.go:181] (0x4000c74000) (0x4000a5a1e0) Stream added, broadcasting: 1\nI0113 06:51:41.412187 844 log.go:181] (0x4000c74000) Reply frame received for 1\nI0113 06:51:41.412691 844 log.go:181] (0x4000c74000) (0x4000a5a280) Create stream\nI0113 06:51:41.412743 844 log.go:181] (0x4000c74000) (0x4000a5a280) Stream added, broadcasting: 3\nI0113 06:51:41.414447 844 log.go:181] (0x4000c74000) Reply frame received for 3\nI0113 06:51:41.414946 844 log.go:181] (0x4000c74000) (0x400063c000) Create stream\nI0113 06:51:41.415054 844 log.go:181] (0x4000c74000) (0x400063c000) Stream added, broadcasting: 5\nI0113 06:51:41.416727 844 log.go:181] (0x4000c74000) Reply frame received for 5\nI0113 06:51:41.497018 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.497649 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.497826 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.497994 844 log.go:181] (0x4000a5a280) (3) Data frame handling\n+ seq 0 15\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.499756 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.499853 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.500661 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.500796 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.501062 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.501221 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.501337 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.501431 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.501599 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.501693 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.501791 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.507712 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.507848 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.507980 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.508433 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.508566 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.508680 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.508772 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.508935 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.509273 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.514800 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.514913 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.515043 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.515551 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.515670 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.515800 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.515982 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.516208 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.516369 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.522985 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.523138 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.523389 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.523671 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.523835 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.524023 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.524179 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.524316 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.524418 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.530294 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.530429 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.530563 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.531359 844 log.go:181] (0x4000c74000) Data frame 
received for 5\nI0113 06:51:41.531548 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.531706 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.531882 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.531986 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.532124 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.536193 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.536311 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.536475 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.536780 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.537049 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.537278 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.537458 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.537570 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.537724 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.543887 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.544036 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.544166 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.544559 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.544684 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.544806 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.545034 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.545160 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.545325 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.550048 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.550132 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.550244 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.550674 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.550794 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.550907 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.551016 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.551117 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.551206 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.558856 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.558927 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.559006 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.559520 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.559649 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.559758 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.559902 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.560028 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.560141 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.563641 844 
log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.563755 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.563970 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.564249 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.564375 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0113 06:51:41.564468 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.564574 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.564694 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.564974 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.565086 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.565202 844 log.go:181] (0x400063c000) (5) Data frame sent\n 2 http://172.18.0.13:30424/\nI0113 06:51:41.565301 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.568504 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.568684 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.568986 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.569156 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.569289 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.569413 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.569557 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.569710 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.569833 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.573475 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.573629 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.573763 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.573926 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.574086 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.574212 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.577681 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.577826 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.577985 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.578432 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.578574 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.578708 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.578818 844 log.go:181] (0x400063c000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0113 06:51:41.578918 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.579055 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.579156 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.579314 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.579434 844 log.go:181] (0x400063c000) (5) Data frame handling\n http://172.18.0.13:30424/\nI0113 06:51:41.579570 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.579699 844 log.go:181] (0x400063c000) (5) Data frame sent\nI0113 06:51:41.579822 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.584285 844 log.go:181] (0x4000c74000) Data 
frame received for 3\nI0113 06:51:41.584417 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.584567 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.585361 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.585460 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.585526 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.585588 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.585640 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.585704 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.590722 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.590817 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.590918 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.591503 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.591587 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.591649 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.591712 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.591766 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.591831 844 log.go:181] (0x400063c000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:41.598355 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.598481 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.598627 844 log.go:181] (0x4000a5a280) (3) Data frame sent\nI0113 06:51:41.599412 844 log.go:181] (0x4000c74000) Data frame received for 5\nI0113 06:51:41.599496 844 log.go:181] (0x400063c000) (5) Data frame handling\nI0113 06:51:41.599892 844 log.go:181] (0x4000c74000) Data frame received for 3\nI0113 06:51:41.600007 844 log.go:181] (0x4000a5a280) (3) Data frame handling\nI0113 06:51:41.601938 844 log.go:181] (0x4000c74000) Data frame received for 1\nI0113 06:51:41.602027 844 log.go:181] (0x4000a5a1e0) (1) Data frame handling\nI0113 06:51:41.602105 844 log.go:181] (0x4000a5a1e0) (1) Data frame sent\nI0113 06:51:41.603087 844 log.go:181] (0x4000c74000) (0x4000a5a1e0) Stream removed, broadcasting: 1\nI0113 06:51:41.605711 844 log.go:181] (0x4000c74000) Go away received\nI0113 06:51:41.608918 844 log.go:181] (0x4000c74000) (0x4000a5a1e0) Stream removed, broadcasting: 1\nI0113 06:51:41.609482 844 log.go:181] (0x4000c74000) (0x4000a5a280) Stream removed, broadcasting: 3\nI0113 06:51:41.609715 844 log.go:181] (0x4000c74000) (0x400063c000) Stream removed, broadcasting: 5\n" Jan 13 06:51:41.625: INFO: stdout: "\naffinity-nodeport-transition-khgpj\naffinity-nodeport-transition-s5mnb\naffinity-nodeport-transition-s5mnb\naffinity-nodeport-transition-s5mnb\naffinity-nodeport-transition-s5mnb\naffinity-nodeport-transition-khgpj\naffinity-nodeport-transition-khgpj\naffinity-nodeport-transition-khgpj\naffinity-nodeport-transition-khgpj\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-s5mnb\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-s5mnb" Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-khgpj Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.626: INFO: 
Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-khgpj Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-khgpj Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-khgpj Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-khgpj Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:41.626: INFO: Received response from host: affinity-nodeport-transition-s5mnb Jan 13 06:51:41.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-5012 exec execpod-affinitymfxmw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:30424/ ; done' Jan 13 06:51:43.308: INFO: stderr: "I0113 06:51:43.110166 864 log.go:181] (0x4000294000) (0x4000917180) Create stream\nI0113 06:51:43.112809 864 log.go:181] (0x4000294000) (0x4000917180) Stream added, broadcasting: 1\nI0113 06:51:43.126168 864 log.go:181] (0x4000294000) Reply frame received for 1\nI0113 06:51:43.127397 864 log.go:181] (0x4000294000) (0x4000d18000) Create stream\nI0113 06:51:43.127509 864 log.go:181] (0x4000294000) (0x4000d18000) Stream added, broadcasting: 3\nI0113 06:51:43.129478 864 log.go:181] (0x4000294000) Reply frame received for 3\nI0113 06:51:43.129962 864 log.go:181] (0x4000294000) (0x40008921e0) Create stream\nI0113 06:51:43.130064 864 log.go:181] (0x4000294000) (0x40008921e0) Stream added, broadcasting: 5\nI0113 06:51:43.131994 864 log.go:181] (0x4000294000) Reply frame received for 5\nI0113 06:51:43.197311 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.197805 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.197979 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.198219 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.200016 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.200721 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.201012 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.201184 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.201296 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.201416 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.201587 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.201778 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.201913 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.202062 864 
log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.202242 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.208155 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.208275 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.208428 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.208812 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.209037 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.209208 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.209394 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.209542 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.209667 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.214201 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.214299 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.214423 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.215007 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.215118 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.215293 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.215558 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.215700 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.215807 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.218438 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.218518 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.218619 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.219316 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.219441 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.219562 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.219732 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.219848 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.219961 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.223944 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.224032 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.224148 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.224999 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.225184 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.225342 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.225453 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.225559 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.225665 864 log.go:181] (0x40008921e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.230979 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.231067 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.231166 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.231608 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.231732 864 log.go:181] (0x4000d18000) (3) Data 
frame handling\nI0113 06:51:43.231864 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.232038 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.232239 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.232414 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.236746 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.237006 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.237182 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.237450 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.237602 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.237786 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.237961 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.238079 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.238204 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.243650 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.243735 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.243822 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.244529 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.244685 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.244803 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.244958 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.245066 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.245211 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.249527 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.249653 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.249815 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.250917 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.251070 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ I0113 06:51:43.251206 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.251320 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.251439 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.251592 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.251706 864 log.go:181] (0x40008921e0) (5) Data frame handling\ncurl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.251811 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.251897 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.254521 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.254681 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.254823 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.255648 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.255780 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.255885 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.255980 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.256092 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 
06:51:43.256231 864 log.go:181] (0x40008921e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0113 06:51:43.256347 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.256465 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.256589 864 log.go:181] (0x40008921e0) (5) Data frame sent\n http://172.18.0.13:30424/\nI0113 06:51:43.260208 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.260409 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.260535 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.261270 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.261393 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.261535 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.261685 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.261819 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.261964 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.265211 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.265371 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.265554 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.265765 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.265864 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.265940 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.266007 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.266078 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.266166 864 log.go:181] (0x40008921e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.269185 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.269264 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.269345 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.270264 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.270400 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.270492 864 log.go:181] (0x40008921e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.270561 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.270626 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.270697 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.274973 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.275041 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.275113 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.275934 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.276092 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.276233 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.276353 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.276472 864 log.go:181] (0x40008921e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.276599 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.280474 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.280612 864 log.go:181] 
(0x4000d18000) (3) Data frame handling\nI0113 06:51:43.280789 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.281043 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.281209 864 log.go:181] (0x40008921e0) (5) Data frame sent\nI0113 06:51:43.281545 864 log.go:181] (0x4000d18000) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30424/\nI0113 06:51:43.281733 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.281868 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.282009 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.287685 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.287833 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.288032 864 log.go:181] (0x4000d18000) (3) Data frame sent\nI0113 06:51:43.288231 864 log.go:181] (0x4000294000) Data frame received for 5\nI0113 06:51:43.288331 864 log.go:181] (0x40008921e0) (5) Data frame handling\nI0113 06:51:43.288624 864 log.go:181] (0x4000294000) Data frame received for 3\nI0113 06:51:43.288787 864 log.go:181] (0x4000d18000) (3) Data frame handling\nI0113 06:51:43.290506 864 log.go:181] (0x4000294000) Data frame received for 1\nI0113 06:51:43.290628 864 log.go:181] (0x4000917180) (1) Data frame handling\nI0113 06:51:43.290777 864 log.go:181] (0x4000917180) (1) Data frame sent\nI0113 06:51:43.292064 864 log.go:181] (0x4000294000) (0x4000917180) Stream removed, broadcasting: 1\nI0113 06:51:43.295771 864 log.go:181] (0x4000294000) Go away received\nI0113 06:51:43.300052 864 log.go:181] (0x4000294000) (0x4000917180) Stream removed, broadcasting: 1\nI0113 06:51:43.300500 864 log.go:181] (0x4000294000) (0x4000d18000) Stream removed, broadcasting: 3\nI0113 06:51:43.300791 864 log.go:181] (0x4000294000) (0x40008921e0) Stream removed, broadcasting: 5\n" Jan 13 06:51:43.313: INFO: stdout: "\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x\naffinity-nodeport-transition-26r4x" Jan 13 06:51:43.313: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.313: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.313: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.313: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.313: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received 
response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Received response from host: affinity-nodeport-transition-26r4x Jan 13 06:51:43.314: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5012, will wait for the garbage collector to delete the pods Jan 13 06:51:43.426: INFO: Deleting ReplicationController affinity-nodeport-transition took: 16.820932ms Jan 13 06:51:44.027: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.678847ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:52:20.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5012" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:61.225 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":78,"skipped":1406,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:52:20.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:52:20.314: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:52:26.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-762" for this suite. 
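Most of the output for the services-5012 spec above is kubectl exec plumbing; the substance is the two curl loops, whose responses go from being spread across the three affinity-nodeport-transition backends to all coming from a single pod once ClientIP affinity takes effect. A condensed version of the same probe, assuming an existing deployment affinity-demo whose pods answer HTTP on port 80 with their own pod name, a client pod named client, and an illustrative node IP (all of these names are invented for this sketch):

# expose the backends via NodePort and start with affinity switched off
kubectl expose deployment affinity-demo --port=80 --type=NodePort
NODE_IP=192.0.2.10        # any node IP (illustrative)
NODE_PORT=$(kubectl get service affinity-demo -o jsonpath='{.spec.ports[0].nodePort}')

# without affinity the responses should name several different backend pods
kubectl exec client -- /bin/sh -c \
  "for i in \$(seq 0 15); do curl -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; echo; done"

# switch the service to ClientIP affinity and repeat: one backend should answer every time
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'
kubectl exec client -- /bin/sh -c \
  "for i in \$(seq 0 15); do curl -s --connect-timeout 2 http://$NODE_IP:$NODE_PORT/; echo; done"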
• [SLOW TEST:6.531 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":309,"completed":79,"skipped":1415,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:52:26.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod test-webserver-7b84d66b-9bbc-480c-8c59-3dd477ac61ac in namespace container-probe-2414 Jan 13 06:52:30.899: INFO: Started pod test-webserver-7b84d66b-9bbc-480c-8c59-3dd477ac61ac in namespace container-probe-2414 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 06:52:30.905: INFO: Initial restart count of pod test-webserver-7b84d66b-9bbc-480c-8c59-3dd477ac61ac is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:56:32.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2414" for this suite. 
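A hedged sketch of the pattern the probing spec above verifies: a pod whose HTTP liveness probe against /healthz keeps succeeding, so its restartCount stays at 0 for the whole observation window. The image and probe settings below are illustrative assumptions, not values taken from the test.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: webserver
    image: registry.example/webserver:latest   # placeholder; any server answering 200 on /healthz works
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
EOF

# After a few minutes the restart count should still be 0:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'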
• [SLOW TEST:245.490 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":80,"skipped":1419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:56:32.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:56:32.761: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:56:33.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1294" for this suite. 
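As a companion note to the status sub-resource spec above, a small sketch of where that data lives; foos.example.com is a hypothetical CRD name, not one created by this run.

# The CRD's own status (conditions such as Established and NamesAccepted) is
# exposed under its /status sub-resource:
kubectl get crd foos.example.com -o jsonpath='{.status.conditions[*].type}'

# Updates and patches to status go through the sub-resource endpoint, e.g.
#   PATCH /apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status
# which is what "getting/updating/patching ... status sub-resource" refers to.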
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":309,"completed":81,"skipped":1461,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:56:33.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-5005 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 06:56:33.498: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 06:56:33.609: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:56:35.615: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 06:56:37.622: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:39.616: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:41.616: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:43.617: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:45.616: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:47.618: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:49.681: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 06:56:51.617: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 06:56:51.627: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 13 06:56:55.747: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 13 06:56:55.747: INFO: Going to poll 10.244.2.205 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 13 06:56:55.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.205:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5005 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:56:55.752: INFO: >>> kubeConfig: /root/.kube/config I0113 06:56:55.822826 10 log.go:181] (0x40005bb8c0) (0x400052d5e0) Create stream I0113 06:56:55.823000 10 log.go:181] (0x40005bb8c0) (0x400052d5e0) Stream added, broadcasting: 1 I0113 06:56:55.826929 10 log.go:181] (0x40005bb8c0) Reply frame received for 1 I0113 06:56:55.827127 10 log.go:181] (0x40005bb8c0) (0x400052d680) Create stream I0113 06:56:55.827227 10 log.go:181] (0x40005bb8c0) (0x400052d680) Stream added, broadcasting: 3 I0113 06:56:55.828793 10 log.go:181] (0x40005bb8c0) 
Reply frame received for 3 I0113 06:56:55.829168 10 log.go:181] (0x40005bb8c0) (0x400052d7c0) Create stream I0113 06:56:55.829303 10 log.go:181] (0x40005bb8c0) (0x400052d7c0) Stream added, broadcasting: 5 I0113 06:56:55.831052 10 log.go:181] (0x40005bb8c0) Reply frame received for 5 I0113 06:56:55.895580 10 log.go:181] (0x40005bb8c0) Data frame received for 3 I0113 06:56:55.895777 10 log.go:181] (0x400052d680) (3) Data frame handling I0113 06:56:55.895995 10 log.go:181] (0x400052d680) (3) Data frame sent I0113 06:56:55.896126 10 log.go:181] (0x40005bb8c0) Data frame received for 3 I0113 06:56:55.896262 10 log.go:181] (0x400052d680) (3) Data frame handling I0113 06:56:55.896415 10 log.go:181] (0x40005bb8c0) Data frame received for 5 I0113 06:56:55.896565 10 log.go:181] (0x400052d7c0) (5) Data frame handling I0113 06:56:55.897485 10 log.go:181] (0x40005bb8c0) Data frame received for 1 I0113 06:56:55.897582 10 log.go:181] (0x400052d5e0) (1) Data frame handling I0113 06:56:55.897678 10 log.go:181] (0x400052d5e0) (1) Data frame sent I0113 06:56:55.897779 10 log.go:181] (0x40005bb8c0) (0x400052d5e0) Stream removed, broadcasting: 1 I0113 06:56:55.897919 10 log.go:181] (0x40005bb8c0) Go away received I0113 06:56:55.898202 10 log.go:181] (0x40005bb8c0) (0x400052d5e0) Stream removed, broadcasting: 1 I0113 06:56:55.898314 10 log.go:181] (0x40005bb8c0) (0x400052d680) Stream removed, broadcasting: 3 I0113 06:56:55.898401 10 log.go:181] (0x40005bb8c0) (0x400052d7c0) Stream removed, broadcasting: 5 Jan 13 06:56:55.898: INFO: Found all 1 expected endpoints: [netserver-0] Jan 13 06:56:55.898: INFO: Going to poll 10.244.1.32 on port 8080 at least 0 times, with a maximum of 34 tries before failing Jan 13 06:56:55.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5005 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 06:56:55.903: INFO: >>> kubeConfig: /root/.kube/config I0113 06:56:55.957665 10 log.go:181] (0x40064ce160) (0x400052dc20) Create stream I0113 06:56:55.957825 10 log.go:181] (0x40064ce160) (0x400052dc20) Stream added, broadcasting: 1 I0113 06:56:55.961593 10 log.go:181] (0x40064ce160) Reply frame received for 1 I0113 06:56:55.961862 10 log.go:181] (0x40064ce160) (0x400052dcc0) Create stream I0113 06:56:55.961989 10 log.go:181] (0x40064ce160) (0x400052dcc0) Stream added, broadcasting: 3 I0113 06:56:55.963821 10 log.go:181] (0x40064ce160) Reply frame received for 3 I0113 06:56:55.964011 10 log.go:181] (0x40064ce160) (0x40023da3c0) Create stream I0113 06:56:55.964118 10 log.go:181] (0x40064ce160) (0x40023da3c0) Stream added, broadcasting: 5 I0113 06:56:55.965981 10 log.go:181] (0x40064ce160) Reply frame received for 5 I0113 06:56:56.032245 10 log.go:181] (0x40064ce160) Data frame received for 5 I0113 06:56:56.032423 10 log.go:181] (0x40023da3c0) (5) Data frame handling I0113 06:56:56.032616 10 log.go:181] (0x40064ce160) Data frame received for 3 I0113 06:56:56.032773 10 log.go:181] (0x400052dcc0) (3) Data frame handling I0113 06:56:56.032952 10 log.go:181] (0x400052dcc0) (3) Data frame sent I0113 06:56:56.033099 10 log.go:181] (0x40064ce160) Data frame received for 3 I0113 06:56:56.033262 10 log.go:181] (0x400052dcc0) (3) Data frame handling I0113 06:56:56.034105 10 log.go:181] (0x40064ce160) Data frame received for 1 I0113 06:56:56.034213 10 log.go:181] (0x400052dc20) (1) 
Data frame handling I0113 06:56:56.034306 10 log.go:181] (0x400052dc20) (1) Data frame sent I0113 06:56:56.034404 10 log.go:181] (0x40064ce160) (0x400052dc20) Stream removed, broadcasting: 1 I0113 06:56:56.034522 10 log.go:181] (0x40064ce160) Go away received I0113 06:56:56.034978 10 log.go:181] (0x40064ce160) (0x400052dc20) Stream removed, broadcasting: 1 I0113 06:56:56.035136 10 log.go:181] (0x40064ce160) (0x400052dcc0) Stream removed, broadcasting: 3 I0113 06:56:56.035234 10 log.go:181] (0x40064ce160) (0x40023da3c0) Stream removed, broadcasting: 5 Jan 13 06:56:56.035: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:56:56.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5005" for this suite. • [SLOW TEST:22.649 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":82,"skipped":1468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:56:56.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 06:56:56.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9407 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Jan 13 06:56:57.489: INFO: stderr: "" Jan 13 06:56:57.489: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Jan 13 06:56:57.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=kubectl-9407 delete pods e2e-test-httpd-pod' Jan 13 06:57:10.118: INFO: stderr: "" Jan 13 06:57:10.118: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:57:10.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9407" for this suite. • [SLOW TEST:14.076 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":309,"completed":83,"skipped":1502,"failed":0} S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:57:10.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jan 13 06:57:10.327: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 13 06:57:15.336: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:57:15.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2258" for this suite. 
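A short sketch of the mechanism the ReplicationController spec above exercises: the controller selects pods by label, so relabeling a pod out of the selector causes the RC to release (orphan) it and start a replacement. The selector key/value and pod name are placeholders.

# Inspect the RC's label selector (hypothetical name=pod-release selector):
kubectl get rc pod-release -o jsonpath='{.spec.selector}'

# Change the matched label on one pod; it no longer matches the selector,
# its controller ownerReference is dropped, and the RC creates a new pod.
kubectl label pod <pod-name> name=released --overwrite
kubectl get pods -L name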
• [SLOW TEST:5.341 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":309,"completed":84,"skipped":1503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:57:15.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 13 06:57:15.899: INFO: Waiting up to 5m0s for pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd" in namespace "emptydir-6457" to be "Succeeded or Failed" Jan 13 06:57:15.962: INFO: Pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd": Phase="Pending", Reason="", readiness=false. Elapsed: 62.806018ms Jan 13 06:57:18.370: INFO: Pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.470430804s Jan 13 06:57:20.393: INFO: Pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.493549814s Jan 13 06:57:22.400: INFO: Pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.500428225s STEP: Saw pod success Jan 13 06:57:22.400: INFO: Pod "pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd" satisfied condition "Succeeded or Failed" Jan 13 06:57:22.406: INFO: Trying to get logs from node leguer-worker pod pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd container test-container: STEP: delete the pod Jan 13 06:57:22.501: INFO: Waiting for pod pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd to disappear Jan 13 06:57:22.507: INFO: Pod pod-554ee53d-9f8d-4fad-91e7-bd13a66916bd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:57:22.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6457" for this suite. 
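A minimal sketch, assuming a generic busybox-style image, of the emptyDir-on-tmpfs pattern checked by the EmptyDir spec above: a memory-backed volume mounted into the pod with world-writable (0777) permissions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29            # illustrative image
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
EOF

kubectl logs emptydir-demo         # expect drwxrwxrwx and a tmpfs mount entry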
• [SLOW TEST:7.042 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":85,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:57:22.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 06:57:24.285: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 06:57:26.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746117844, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746117844, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746117844, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746117844, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 06:57:29.340: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 
06:57:29.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8488" for this suite. STEP: Destroying namespace "webhook-8488-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.024 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":309,"completed":86,"skipped":1554,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:57:29.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7003 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-7003 Jan 13 06:57:29.790: INFO: Found 0 stateful pods, waiting for 1 Jan 13 06:57:39.818: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 06:57:39.852: INFO: Deleting all statefulset in ns statefulset-7003 Jan 13 06:57:39.910: INFO: Scaling statefulset ss to 0 Jan 13 06:58:00.016: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 06:58:00.023: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:58:00.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7003" for this suite. 
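For context on the StatefulSet spec above, a hedged sketch of the scale sub-resource it exercises: reading and writing spec.replicas through the /scale endpoint instead of editing the StatefulSet object directly. The name ss and the default namespace are placeholders.

# Read the scale sub-resource directly from the API:
kubectl get --raw /apis/apps/v1/namespaces/default/statefulsets/ss/scale

# kubectl scale writes through the same sub-resource:
kubectl scale statefulset ss --replicas=2
kubectl get statefulset ss -o jsonpath='{.spec.replicas}'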
• [SLOW TEST:30.531 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":309,"completed":87,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:58:00.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7919 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7919 I0113 06:58:00.270613 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7919, replica count: 2 I0113 06:58:03.321960 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:58:06.322796 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:58:06.323: INFO: Creating new exec pod Jan 13 06:58:11.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7919 exec execpodxl9cs -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 13 06:58:12.966: INFO: stderr: "I0113 06:58:12.862267 924 log.go:181] (0x40001bcc60) (0x40007b0320) Create stream\nI0113 06:58:12.864650 924 log.go:181] (0x40001bcc60) (0x40007b0320) Stream added, broadcasting: 1\nI0113 06:58:12.875619 924 log.go:181] (0x40001bcc60) Reply frame received for 1\nI0113 06:58:12.876220 924 log.go:181] (0x40001bcc60) (0x40007b03c0) Create stream\nI0113 06:58:12.876275 924 log.go:181] (0x40001bcc60) (0x40007b03c0) Stream added, broadcasting: 3\nI0113 06:58:12.877885 924 log.go:181] (0x40001bcc60) Reply frame received for 3\nI0113 06:58:12.878221 924 log.go:181] (0x40001bcc60) (0x4000b12000) Create stream\nI0113 06:58:12.878297 924 log.go:181] (0x40001bcc60) (0x4000b12000) Stream added, broadcasting: 5\nI0113 06:58:12.879520 924 log.go:181] (0x40001bcc60) Reply 
frame received for 5\nI0113 06:58:12.942973 924 log.go:181] (0x40001bcc60) Data frame received for 5\nI0113 06:58:12.943528 924 log.go:181] (0x40001bcc60) Data frame received for 3\nI0113 06:58:12.943711 924 log.go:181] (0x40007b03c0) (3) Data frame handling\nI0113 06:58:12.943863 924 log.go:181] (0x4000b12000) (5) Data frame handling\nI0113 06:58:12.944313 924 log.go:181] (0x40001bcc60) Data frame received for 1\nI0113 06:58:12.944469 924 log.go:181] (0x40007b0320) (1) Data frame handling\nI0113 06:58:12.946311 924 log.go:181] (0x4000b12000) (5) Data frame sent\nI0113 06:58:12.946497 924 log.go:181] (0x40001bcc60) Data frame received for 5\nI0113 06:58:12.946583 924 log.go:181] (0x4000b12000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0113 06:58:12.947629 924 log.go:181] (0x40007b0320) (1) Data frame sent\nI0113 06:58:12.949192 924 log.go:181] (0x4000b12000) (5) Data frame sent\nI0113 06:58:12.949319 924 log.go:181] (0x40001bcc60) Data frame received for 5\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0113 06:58:12.949413 924 log.go:181] (0x4000b12000) (5) Data frame handling\nI0113 06:58:12.950843 924 log.go:181] (0x40001bcc60) (0x40007b0320) Stream removed, broadcasting: 1\nI0113 06:58:12.953574 924 log.go:181] (0x40001bcc60) Go away received\nI0113 06:58:12.956688 924 log.go:181] (0x40001bcc60) (0x40007b0320) Stream removed, broadcasting: 1\nI0113 06:58:12.957520 924 log.go:181] (0x40001bcc60) (0x40007b03c0) Stream removed, broadcasting: 3\nI0113 06:58:12.957775 924 log.go:181] (0x40001bcc60) (0x4000b12000) Stream removed, broadcasting: 5\n" Jan 13 06:58:12.967: INFO: stdout: "" Jan 13 06:58:12.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7919 exec execpodxl9cs -- /bin/sh -x -c nc -zv -t -w 2 10.96.251.250 80' Jan 13 06:58:14.621: INFO: stderr: "I0113 06:58:14.496387 944 log.go:181] (0x4000166000) (0x4000018140) Create stream\nI0113 06:58:14.502044 944 log.go:181] (0x4000166000) (0x4000018140) Stream added, broadcasting: 1\nI0113 06:58:14.515600 944 log.go:181] (0x4000166000) Reply frame received for 1\nI0113 06:58:14.516447 944 log.go:181] (0x4000166000) (0x4000a721e0) Create stream\nI0113 06:58:14.516527 944 log.go:181] (0x4000166000) (0x4000a721e0) Stream added, broadcasting: 3\nI0113 06:58:14.518094 944 log.go:181] (0x4000166000) Reply frame received for 3\nI0113 06:58:14.518307 944 log.go:181] (0x4000166000) (0x4000018780) Create stream\nI0113 06:58:14.518361 944 log.go:181] (0x4000166000) (0x4000018780) Stream added, broadcasting: 5\nI0113 06:58:14.519263 944 log.go:181] (0x4000166000) Reply frame received for 5\nI0113 06:58:14.604628 944 log.go:181] (0x4000166000) Data frame received for 3\nI0113 06:58:14.605014 944 log.go:181] (0x4000166000) Data frame received for 1\nI0113 06:58:14.605179 944 log.go:181] (0x4000a721e0) (3) Data frame handling\nI0113 06:58:14.605379 944 log.go:181] (0x4000166000) Data frame received for 5\nI0113 06:58:14.605512 944 log.go:181] (0x4000018780) (5) Data frame handling\nI0113 06:58:14.605651 944 log.go:181] (0x4000018140) (1) Data frame handling\nI0113 06:58:14.607122 944 log.go:181] (0x4000018140) (1) Data frame sent\nI0113 06:58:14.607222 944 log.go:181] (0x4000018780) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.251.250 80\nConnection to 10.96.251.250 80 port [tcp/http] succeeded!\nI0113 06:58:14.607589 944 log.go:181] (0x4000166000) Data frame received for 5\nI0113 06:58:14.607679 944 log.go:181] (0x4000018780) 
(5) Data frame handling\nI0113 06:58:14.609085 944 log.go:181] (0x4000166000) (0x4000018140) Stream removed, broadcasting: 1\nI0113 06:58:14.610736 944 log.go:181] (0x4000166000) Go away received\nI0113 06:58:14.613379 944 log.go:181] (0x4000166000) (0x4000018140) Stream removed, broadcasting: 1\nI0113 06:58:14.613990 944 log.go:181] (0x4000166000) (0x4000a721e0) Stream removed, broadcasting: 3\nI0113 06:58:14.614185 944 log.go:181] (0x4000166000) (0x4000018780) Stream removed, broadcasting: 5\n" Jan 13 06:58:14.622: INFO: stdout: "" Jan 13 06:58:14.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7919 exec execpodxl9cs -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30660' Jan 13 06:58:16.218: INFO: stderr: "I0113 06:58:16.094320 964 log.go:181] (0x400003b1e0) (0x40003b5680) Create stream\nI0113 06:58:16.100200 964 log.go:181] (0x400003b1e0) (0x40003b5680) Stream added, broadcasting: 1\nI0113 06:58:16.112026 964 log.go:181] (0x400003b1e0) Reply frame received for 1\nI0113 06:58:16.112644 964 log.go:181] (0x400003b1e0) (0x4000a0e1e0) Create stream\nI0113 06:58:16.112705 964 log.go:181] (0x400003b1e0) (0x4000a0e1e0) Stream added, broadcasting: 3\nI0113 06:58:16.114397 964 log.go:181] (0x400003b1e0) Reply frame received for 3\nI0113 06:58:16.114833 964 log.go:181] (0x400003b1e0) (0x40000ca3c0) Create stream\nI0113 06:58:16.114931 964 log.go:181] (0x400003b1e0) (0x40000ca3c0) Stream added, broadcasting: 5\nI0113 06:58:16.116163 964 log.go:181] (0x400003b1e0) Reply frame received for 5\nI0113 06:58:16.196470 964 log.go:181] (0x400003b1e0) Data frame received for 5\nI0113 06:58:16.196776 964 log.go:181] (0x400003b1e0) Data frame received for 3\nI0113 06:58:16.197036 964 log.go:181] (0x4000a0e1e0) (3) Data frame handling\nI0113 06:58:16.197249 964 log.go:181] (0x40000ca3c0) (5) Data frame handling\nI0113 06:58:16.197631 964 log.go:181] (0x400003b1e0) Data frame received for 1\nI0113 06:58:16.197757 964 log.go:181] (0x40003b5680) (1) Data frame handling\nI0113 06:58:16.200058 964 log.go:181] (0x40003b5680) (1) Data frame sent\nI0113 06:58:16.200515 964 log.go:181] (0x40000ca3c0) (5) Data frame sent\nI0113 06:58:16.200654 964 log.go:181] (0x400003b1e0) Data frame received for 5\nI0113 06:58:16.200764 964 log.go:181] (0x40000ca3c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30660\nConnection to 172.18.0.13 30660 port [tcp/30660] succeeded!\nI0113 06:58:16.201480 964 log.go:181] (0x400003b1e0) (0x40003b5680) Stream removed, broadcasting: 1\nI0113 06:58:16.206730 964 log.go:181] (0x400003b1e0) Go away received\nI0113 06:58:16.210158 964 log.go:181] (0x400003b1e0) (0x40003b5680) Stream removed, broadcasting: 1\nI0113 06:58:16.210465 964 log.go:181] (0x400003b1e0) (0x4000a0e1e0) Stream removed, broadcasting: 3\nI0113 06:58:16.210666 964 log.go:181] (0x400003b1e0) (0x40000ca3c0) Stream removed, broadcasting: 5\n" Jan 13 06:58:16.219: INFO: stdout: "" Jan 13 06:58:16.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7919 exec execpodxl9cs -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30660' Jan 13 06:58:17.719: INFO: stderr: "I0113 06:58:17.622115 984 log.go:181] (0x4000d16000) (0x4000724000) Create stream\nI0113 06:58:17.624611 984 log.go:181] (0x4000d16000) (0x4000724000) Stream added, broadcasting: 1\nI0113 06:58:17.636998 984 log.go:181] (0x4000d16000) Reply frame received for 1\nI0113 06:58:17.637621 984 log.go:181] (0x4000d16000) 
(0x40007240a0) Create stream\nI0113 06:58:17.637677 984 log.go:181] (0x4000d16000) (0x40007240a0) Stream added, broadcasting: 3\nI0113 06:58:17.639177 984 log.go:181] (0x4000d16000) Reply frame received for 3\nI0113 06:58:17.639545 984 log.go:181] (0x4000d16000) (0x4000f000a0) Create stream\nI0113 06:58:17.639617 984 log.go:181] (0x4000d16000) (0x4000f000a0) Stream added, broadcasting: 5\nI0113 06:58:17.640952 984 log.go:181] (0x4000d16000) Reply frame received for 5\nI0113 06:58:17.701854 984 log.go:181] (0x4000d16000) Data frame received for 5\nI0113 06:58:17.703008 984 log.go:181] (0x4000d16000) Data frame received for 3\nI0113 06:58:17.703230 984 log.go:181] (0x40007240a0) (3) Data frame handling\nI0113 06:58:17.705536 984 log.go:181] (0x4000d16000) Data frame received for 1\nI0113 06:58:17.705713 984 log.go:181] (0x4000f000a0) (5) Data frame handling\nI0113 06:58:17.705884 984 log.go:181] (0x4000724000) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 30660\nConnection to 172.18.0.12 30660 port [tcp/30660] succeeded!\nI0113 06:58:17.707938 984 log.go:181] (0x4000724000) (1) Data frame sent\nI0113 06:58:17.708136 984 log.go:181] (0x4000f000a0) (5) Data frame sent\nI0113 06:58:17.708432 984 log.go:181] (0x4000d16000) Data frame received for 5\nI0113 06:58:17.708559 984 log.go:181] (0x4000f000a0) (5) Data frame handling\nI0113 06:58:17.708813 984 log.go:181] (0x4000d16000) (0x4000724000) Stream removed, broadcasting: 1\nI0113 06:58:17.709411 984 log.go:181] (0x4000d16000) Go away received\nI0113 06:58:17.711897 984 log.go:181] (0x4000d16000) (0x4000724000) Stream removed, broadcasting: 1\nI0113 06:58:17.712103 984 log.go:181] (0x4000d16000) (0x40007240a0) Stream removed, broadcasting: 3\nI0113 06:58:17.712241 984 log.go:181] (0x4000d16000) (0x4000f000a0) Stream removed, broadcasting: 5\n" Jan 13 06:58:17.720: INFO: stdout: "" Jan 13 06:58:17.720: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:58:17.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7919" for this suite. 
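A minimal sketch of the type transition driven by the Services spec above: a Service first created as type ExternalName and then converted to NodePort, which means clearing externalName and supplying a selector and ports. The service name, selector, and external host are placeholders, not values from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: externalname-demo
spec:
  type: ExternalName
  externalName: example.com
EOF

# Convert to NodePort: drop externalName, add a selector and a port.
kubectl patch service externalname-demo --type merge -p '{"spec":{"type":"NodePort","externalName":null,"selector":{"app":"externalname-demo"},"ports":[{"port":80,"targetPort":8080}]}}'

# The allocated node port can then be probed from the nodes, as the spec does with nc:
kubectl get service externalname-demo -o jsonpath='{.spec.ports[0].nodePort}'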
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:17.729 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":309,"completed":88,"skipped":1574,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:58:17.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-8103 STEP: creating service affinity-nodeport in namespace services-8103 STEP: creating replication controller affinity-nodeport in namespace services-8103 I0113 06:58:18.008520 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8103, replica count: 3 I0113 06:58:21.059944 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:58:24.060707 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 06:58:27.061415 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 06:58:27.091: INFO: Creating new exec pod Jan 13 06:58:32.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8103 exec execpod-affinity9h6qx -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 13 06:58:33.720: INFO: stderr: "I0113 06:58:33.593731 1004 log.go:181] (0x400003a840) (0x4000ae2460) Create stream\nI0113 06:58:33.597822 1004 log.go:181] (0x400003a840) (0x4000ae2460) Stream added, broadcasting: 1\nI0113 06:58:33.609987 1004 log.go:181] (0x400003a840) Reply frame received for 1\nI0113 06:58:33.610914 1004 log.go:181] (0x400003a840) (0x4000ae2500) Create stream\nI0113 06:58:33.610997 1004 log.go:181] (0x400003a840) (0x4000ae2500) Stream added, broadcasting: 3\nI0113 06:58:33.612462 1004 log.go:181] (0x400003a840) Reply frame received for 3\nI0113 06:58:33.612733 1004 log.go:181] (0x400003a840) (0x4000b8e000) Create stream\nI0113 06:58:33.612794 1004 log.go:181] (0x400003a840) 
(0x4000b8e000) Stream added, broadcasting: 5\nI0113 06:58:33.613906 1004 log.go:181] (0x400003a840) Reply frame received for 5\nI0113 06:58:33.699901 1004 log.go:181] (0x400003a840) Data frame received for 3\nI0113 06:58:33.700360 1004 log.go:181] (0x4000ae2500) (3) Data frame handling\nI0113 06:58:33.700583 1004 log.go:181] (0x400003a840) Data frame received for 1\nI0113 06:58:33.700714 1004 log.go:181] (0x4000ae2460) (1) Data frame handling\nI0113 06:58:33.701352 1004 log.go:181] (0x400003a840) Data frame received for 5\nI0113 06:58:33.701489 1004 log.go:181] (0x4000b8e000) (5) Data frame handling\nI0113 06:58:33.702809 1004 log.go:181] (0x4000ae2460) (1) Data frame sent\nI0113 06:58:33.704430 1004 log.go:181] (0x4000b8e000) (5) Data frame sent\nI0113 06:58:33.704552 1004 log.go:181] (0x400003a840) Data frame received for 5\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0113 06:58:33.705532 1004 log.go:181] (0x400003a840) (0x4000ae2460) Stream removed, broadcasting: 1\nI0113 06:58:33.707298 1004 log.go:181] (0x4000b8e000) (5) Data frame handling\nI0113 06:58:33.708313 1004 log.go:181] (0x400003a840) Go away received\nI0113 06:58:33.711739 1004 log.go:181] (0x400003a840) (0x4000ae2460) Stream removed, broadcasting: 1\nI0113 06:58:33.712329 1004 log.go:181] (0x400003a840) (0x4000ae2500) Stream removed, broadcasting: 3\nI0113 06:58:33.712569 1004 log.go:181] (0x400003a840) (0x4000b8e000) Stream removed, broadcasting: 5\n" Jan 13 06:58:33.722: INFO: stdout: "" Jan 13 06:58:33.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8103 exec execpod-affinity9h6qx -- /bin/sh -x -c nc -zv -t -w 2 10.96.35.23 80' Jan 13 06:58:35.285: INFO: stderr: "I0113 06:58:35.173831 1024 log.go:181] (0x400003a210) (0x4000822320) Create stream\nI0113 06:58:35.176444 1024 log.go:181] (0x400003a210) (0x4000822320) Stream added, broadcasting: 1\nI0113 06:58:35.187669 1024 log.go:181] (0x400003a210) Reply frame received for 1\nI0113 06:58:35.188413 1024 log.go:181] (0x400003a210) (0x40002b8000) Create stream\nI0113 06:58:35.188488 1024 log.go:181] (0x400003a210) (0x40002b8000) Stream added, broadcasting: 3\nI0113 06:58:35.189867 1024 log.go:181] (0x400003a210) Reply frame received for 3\nI0113 06:58:35.190107 1024 log.go:181] (0x400003a210) (0x4000487f40) Create stream\nI0113 06:58:35.190161 1024 log.go:181] (0x400003a210) (0x4000487f40) Stream added, broadcasting: 5\nI0113 06:58:35.191424 1024 log.go:181] (0x400003a210) Reply frame received for 5\nI0113 06:58:35.264424 1024 log.go:181] (0x400003a210) Data frame received for 3\nI0113 06:58:35.265056 1024 log.go:181] (0x40002b8000) (3) Data frame handling\nI0113 06:58:35.265373 1024 log.go:181] (0x400003a210) Data frame received for 5\nI0113 06:58:35.265555 1024 log.go:181] (0x4000487f40) (5) Data frame handling\nI0113 06:58:35.266070 1024 log.go:181] (0x400003a210) Data frame received for 1\nI0113 06:58:35.266202 1024 log.go:181] (0x4000822320) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.35.23 80\nConnection to 10.96.35.23 80 port [tcp/http] succeeded!\nI0113 06:58:35.267996 1024 log.go:181] (0x4000822320) (1) Data frame sent\nI0113 06:58:35.268248 1024 log.go:181] (0x4000487f40) (5) Data frame sent\nI0113 06:58:35.268521 1024 log.go:181] (0x400003a210) Data frame received for 5\nI0113 06:58:35.268613 1024 log.go:181] (0x4000487f40) (5) Data frame handling\nI0113 06:58:35.269884 1024 log.go:181] (0x400003a210) (0x4000822320) 
Stream removed, broadcasting: 1\nI0113 06:58:35.272590 1024 log.go:181] (0x400003a210) Go away received\nI0113 06:58:35.276379 1024 log.go:181] (0x400003a210) (0x4000822320) Stream removed, broadcasting: 1\nI0113 06:58:35.276752 1024 log.go:181] (0x400003a210) (0x40002b8000) Stream removed, broadcasting: 3\nI0113 06:58:35.277088 1024 log.go:181] (0x400003a210) (0x4000487f40) Stream removed, broadcasting: 5\n" Jan 13 06:58:35.286: INFO: stdout: "" Jan 13 06:58:35.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8103 exec execpod-affinity9h6qx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31537' Jan 13 06:58:36.929: INFO: stderr: "I0113 06:58:36.795427 1044 log.go:181] (0x40001a6210) (0x4000372000) Create stream\nI0113 06:58:36.798205 1044 log.go:181] (0x40001a6210) (0x4000372000) Stream added, broadcasting: 1\nI0113 06:58:36.810157 1044 log.go:181] (0x40001a6210) Reply frame received for 1\nI0113 06:58:36.810867 1044 log.go:181] (0x40001a6210) (0x40003720a0) Create stream\nI0113 06:58:36.810954 1044 log.go:181] (0x40001a6210) (0x40003720a0) Stream added, broadcasting: 3\nI0113 06:58:36.812698 1044 log.go:181] (0x40001a6210) Reply frame received for 3\nI0113 06:58:36.813125 1044 log.go:181] (0x40001a6210) (0x40007885a0) Create stream\nI0113 06:58:36.813209 1044 log.go:181] (0x40001a6210) (0x40007885a0) Stream added, broadcasting: 5\nI0113 06:58:36.814523 1044 log.go:181] (0x40001a6210) Reply frame received for 5\nI0113 06:58:36.909439 1044 log.go:181] (0x40001a6210) Data frame received for 5\nI0113 06:58:36.909817 1044 log.go:181] (0x40007885a0) (5) Data frame handling\nI0113 06:58:36.910667 1044 log.go:181] (0x40001a6210) Data frame received for 3\nI0113 06:58:36.910825 1044 log.go:181] (0x40003720a0) (3) Data frame handling\nI0113 06:58:36.911819 1044 log.go:181] (0x40001a6210) Data frame received for 1\nI0113 06:58:36.911916 1044 log.go:181] (0x4000372000) (1) Data frame handling\nI0113 06:58:36.912016 1044 log.go:181] (0x4000372000) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 31537\nI0113 06:58:36.912496 1044 log.go:181] (0x40007885a0) (5) Data frame sent\nI0113 06:58:36.913208 1044 log.go:181] (0x40001a6210) Data frame received for 5\nI0113 06:58:36.913289 1044 log.go:181] (0x40007885a0) (5) Data frame handling\nI0113 06:58:36.913365 1044 log.go:181] (0x40007885a0) (5) Data frame sent\nConnection to 172.18.0.13 31537 port [tcp/31537] succeeded!\nI0113 06:58:36.913434 1044 log.go:181] (0x40001a6210) Data frame received for 5\nI0113 06:58:36.914077 1044 log.go:181] (0x40001a6210) (0x4000372000) Stream removed, broadcasting: 1\nI0113 06:58:36.917986 1044 log.go:181] (0x40007885a0) (5) Data frame handling\nI0113 06:58:36.918220 1044 log.go:181] (0x40001a6210) Go away received\nI0113 06:58:36.919431 1044 log.go:181] (0x40001a6210) (0x4000372000) Stream removed, broadcasting: 1\nI0113 06:58:36.920486 1044 log.go:181] (0x40001a6210) (0x40003720a0) Stream removed, broadcasting: 3\nI0113 06:58:36.921069 1044 log.go:181] (0x40001a6210) (0x40007885a0) Stream removed, broadcasting: 5\n" Jan 13 06:58:36.930: INFO: stdout: "" Jan 13 06:58:36.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8103 exec execpod-affinity9h6qx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31537' Jan 13 06:58:38.516: INFO: stderr: "I0113 06:58:38.416325 1064 log.go:181] (0x40000c0c60) (0x40008a81e0) Create stream\nI0113 06:58:38.421696 1064 log.go:181] 
(0x40000c0c60) (0x40008a81e0) Stream added, broadcasting: 1\nI0113 06:58:38.431063 1064 log.go:181] (0x40000c0c60) Reply frame received for 1\nI0113 06:58:38.431628 1064 log.go:181] (0x40000c0c60) (0x40008b7f40) Create stream\nI0113 06:58:38.431692 1064 log.go:181] (0x40000c0c60) (0x40008b7f40) Stream added, broadcasting: 3\nI0113 06:58:38.433204 1064 log.go:181] (0x40000c0c60) Reply frame received for 3\nI0113 06:58:38.433590 1064 log.go:181] (0x40000c0c60) (0x40008c8000) Create stream\nI0113 06:58:38.433688 1064 log.go:181] (0x40000c0c60) (0x40008c8000) Stream added, broadcasting: 5\nI0113 06:58:38.435161 1064 log.go:181] (0x40000c0c60) Reply frame received for 5\nI0113 06:58:38.492146 1064 log.go:181] (0x40000c0c60) Data frame received for 5\nI0113 06:58:38.492670 1064 log.go:181] (0x40000c0c60) Data frame received for 3\nI0113 06:58:38.493315 1064 log.go:181] (0x40008b7f40) (3) Data frame handling\nI0113 06:58:38.493481 1064 log.go:181] (0x40008c8000) (5) Data frame handling\nI0113 06:58:38.493842 1064 log.go:181] (0x40000c0c60) Data frame received for 1\nI0113 06:58:38.494000 1064 log.go:181] (0x40008a81e0) (1) Data frame handling\nI0113 06:58:38.494991 1064 log.go:181] (0x40008a81e0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31537\nConnection to 172.18.0.12 31537 port [tcp/31537] succeeded!\nI0113 06:58:38.495946 1064 log.go:181] (0x40008c8000) (5) Data frame sent\nI0113 06:58:38.496097 1064 log.go:181] (0x40000c0c60) Data frame received for 5\nI0113 06:58:38.496212 1064 log.go:181] (0x40008c8000) (5) Data frame handling\nI0113 06:58:38.498423 1064 log.go:181] (0x40000c0c60) (0x40008a81e0) Stream removed, broadcasting: 1\nI0113 06:58:38.502418 1064 log.go:181] (0x40000c0c60) Go away received\nI0113 06:58:38.506376 1064 log.go:181] (0x40000c0c60) (0x40008a81e0) Stream removed, broadcasting: 1\nI0113 06:58:38.507232 1064 log.go:181] (0x40000c0c60) (0x40008b7f40) Stream removed, broadcasting: 3\nI0113 06:58:38.507556 1064 log.go:181] (0x40000c0c60) (0x40008c8000) Stream removed, broadcasting: 5\n" Jan 13 06:58:38.517: INFO: stdout: "" Jan 13 06:58:38.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-8103 exec execpod-affinity9h6qx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:31537/ ; done' Jan 13 06:58:40.410: INFO: stderr: "I0113 06:58:40.173715 1084 log.go:181] (0x4000d9ebb0) (0x4000d08460) Create stream\nI0113 06:58:40.177324 1084 log.go:181] (0x4000d9ebb0) (0x4000d08460) Stream added, broadcasting: 1\nI0113 06:58:40.191134 1084 log.go:181] (0x4000d9ebb0) Reply frame received for 1\nI0113 06:58:40.192301 1084 log.go:181] (0x4000d9ebb0) (0x4000d08500) Create stream\nI0113 06:58:40.192401 1084 log.go:181] (0x4000d9ebb0) (0x4000d08500) Stream added, broadcasting: 3\nI0113 06:58:40.194320 1084 log.go:181] (0x4000d9ebb0) Reply frame received for 3\nI0113 06:58:40.194589 1084 log.go:181] (0x4000d9ebb0) (0x4000c0e000) Create stream\nI0113 06:58:40.194657 1084 log.go:181] (0x4000d9ebb0) (0x4000c0e000) Stream added, broadcasting: 5\nI0113 06:58:40.196090 1084 log.go:181] (0x4000d9ebb0) Reply frame received for 5\nI0113 06:58:40.289617 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.290196 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.290409 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.290541 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.291386 
1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.291683 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.293003 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.293146 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.293319 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.293714 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.293882 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.294010 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.294157 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.294320 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.294453 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.300081 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.300240 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.300354 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.300453 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.300564 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.300737 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.301000 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.301137 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.301251 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.305526 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.305700 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.305913 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.306893 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.306965 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curlI0113 06:58:40.307077 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.307185 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.307277 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.307363 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.307430 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.307490 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.307578 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.313941 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.314065 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.314173 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.314280 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.314377 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.314495 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/I0113 06:58:40.314581 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.314713 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.314809 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 
06:58:40.314909 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.315004 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n\nI0113 06:58:40.315084 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.320277 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.320372 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.320467 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.321235 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.321335 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.321420 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.321493 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.321565 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.321640 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.327516 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.327614 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.327716 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.327951 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.328058 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0113 06:58:40.328155 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.328281 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.328359 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.328459 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.328602 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.328744 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n 2 http://172.18.0.13:31537/\nI0113 06:58:40.328937 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.333493 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.333615 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.333717 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.334035 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.334138 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.334232 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ echo\n+ curl -q -sI0113 06:58:40.334327 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.334405 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.334517 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.334613 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.334733 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.334834 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.338881 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.339002 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.339143 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.339825 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.339935 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.340069 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.340191 1084 log.go:181] (0x4000d08500) (3) Data 
frame sent\nI0113 06:58:40.340294 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.340407 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/I0113 06:58:40.340525 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.340629 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.340780 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n\nI0113 06:58:40.347079 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.347162 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.347244 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.347865 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.347990 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.348084 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.348207 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.348298 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.348447 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.354547 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.354634 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.354736 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.355042 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.355163 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.355277 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.355414 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.355522 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.355636 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.359668 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.359838 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.360015 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.360121 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.360207 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.360303 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.360408 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.360548 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.360732 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.368103 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.368209 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.368389 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.368981 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.369087 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.369175 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.369256 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.369353 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.369467 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.13:31537/\nI0113 06:58:40.373453 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.373555 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.373672 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.373882 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.373976 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.374046 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.374149 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.374304 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.374474 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.378333 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.378422 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.378541 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.379172 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.379273 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.379324 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.379401 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.379485 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\nI0113 06:58:40.379569 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.384027 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.384091 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.384178 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.384811 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.385056 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.385182 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n+ echo\n+ curl -q -sI0113 06:58:40.385268 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.385392 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.385470 1084 log.go:181] (0x4000c0e000) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.13:31537/\nI0113 06:58:40.385568 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.385684 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.385820 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.389909 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.390105 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.390256 1084 log.go:181] (0x4000d08500) (3) Data frame sent\nI0113 06:58:40.390606 1084 log.go:181] (0x4000d9ebb0) Data frame received for 5\nI0113 06:58:40.390746 1084 log.go:181] (0x4000c0e000) (5) Data frame handling\nI0113 06:58:40.390957 1084 log.go:181] (0x4000d9ebb0) Data frame received for 3\nI0113 06:58:40.391097 1084 log.go:181] (0x4000d08500) (3) Data frame handling\nI0113 06:58:40.392323 1084 log.go:181] (0x4000d9ebb0) Data frame received for 1\nI0113 06:58:40.392416 1084 log.go:181] (0x4000d08460) (1) Data frame handling\nI0113 06:58:40.392495 1084 log.go:181] (0x4000d08460) (1) Data frame sent\nI0113 06:58:40.394577 1084 log.go:181] (0x4000d9ebb0) (0x4000d08460) Stream removed, broadcasting: 1\nI0113 06:58:40.398512 1084 log.go:181] (0x4000d9ebb0) Go away received\nI0113 
06:58:40.401905 1084 log.go:181] (0x4000d9ebb0) (0x4000d08460) Stream removed, broadcasting: 1\nI0113 06:58:40.402290 1084 log.go:181] (0x4000d9ebb0) (0x4000d08500) Stream removed, broadcasting: 3\nI0113 06:58:40.402727 1084 log.go:181] (0x4000d9ebb0) (0x4000c0e000) Stream removed, broadcasting: 5\n" Jan 13 06:58:40.416: INFO: stdout: "\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj\naffinity-nodeport-sxgrj" Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.417: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.418: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.418: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.418: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.418: INFO: Received response from host: affinity-nodeport-sxgrj Jan 13 06:58:40.418: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-8103, will wait for the garbage collector to delete the pods Jan 13 06:58:40.531: INFO: Deleting ReplicationController affinity-nodeport took: 9.1538ms Jan 13 06:58:40.732: INFO: Terminating ReplicationController affinity-nodeport pods took: 200.78614ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 06:59:20.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8103" for this suite. 
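For reference, a minimal client-go sketch (not the e2e framework's own code) of what the session-affinity check above exercises: a NodePort Service with sessionAffinity: ClientIP, plus a request loop that, like the curl loop run inside the exec pod, expects every response to come from the same backend pod. The namespace, selector and node IP below are placeholder assumptions.

    // session_affinity_sketch.go: illustrative only, assuming client-go v0.20.x
    // and a kubeconfig at /root/.kube/config; names and the node IP are placeholders.
    package main

    import (
        "context"
        "fmt"
        "io/ioutil"
        "net/http"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // NodePort Service with ClientIP session affinity, selecting hypothetical
        // backend pods labelled app=affinity-backend.
        svc := &corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
            Spec: corev1.ServiceSpec{
                Type:            corev1.ServiceTypeNodePort,
                SessionAffinity: corev1.ServiceAffinityClientIP,
                Selector:        map[string]string{"app": "affinity-backend"},
                Ports: []corev1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(8080),
                }},
            },
        }
        created, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        nodePort := created.Spec.Ports[0].NodePort

        // Hit the allocated node port repeatedly; with ClientIP affinity every
        // response body (the serving pod's hostname) should be identical.
        client := &http.Client{Timeout: 2 * time.Second}
        seen := map[string]bool{}
        for i := 0; i < 16; i++ {
            resp, err := client.Get(fmt.Sprintf("http://172.18.0.13:%d/", nodePort)) // placeholder node IP
            if err != nil {
                continue
            }
            body, _ := ioutil.ReadAll(resp.Body)
            resp.Body.Close()
            seen[string(body)] = true
        }
        fmt.Printf("distinct backends seen: %d (expect 1 with ClientIP affinity)\n", len(seen))
    }

With ClientIP affinity, kube-proxy pins each client source IP to a single endpoint, which is why all sixteen responses in the log above report the same pod, affinity-nodeport-sxgrj.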
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:62.548 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":89,"skipped":1581,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 06:59:20.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 06:59:20.477: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 13 06:59:32.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 create -f -' Jan 13 06:59:38.115: INFO: stderr: "" Jan 13 06:59:38.115: INFO: stdout: "e2e-test-crd-publish-openapi-2046-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 13 06:59:38.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 delete e2e-test-crd-publish-openapi-2046-crds test-cr' Jan 13 06:59:39.486: INFO: stderr: "" Jan 13 06:59:39.486: INFO: stdout: "e2e-test-crd-publish-openapi-2046-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 13 06:59:39.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 apply -f -' Jan 13 06:59:42.015: INFO: stderr: "" Jan 13 06:59:42.015: INFO: stdout: "e2e-test-crd-publish-openapi-2046-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 13 06:59:42.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1389 --namespace=crd-publish-openapi-1389 delete e2e-test-crd-publish-openapi-2046-crds test-cr' Jan 13 06:59:43.347: INFO: stderr: "" Jan 13 06:59:43.347: INFO: stdout: "e2e-test-crd-publish-openapi-2046-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 13 
06:59:43.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1389 explain e2e-test-crd-publish-openapi-2046-crds' Jan 13 06:59:46.615: INFO: stderr: "" Jan 13 06:59:46.616: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2046-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:00:09.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1389" for this suite. • [SLOW TEST:49.053 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":309,"completed":90,"skipped":1588,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:00:09.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jan 13 07:00:09.561: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.561: 
INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.565: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.566: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.656: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.657: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.781: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:09.781: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 07:00:13.975: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 13 07:00:13.975: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 13 07:00:14.111: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jan 13 07:00:14.161: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jan 13 07:00:14.169: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.169: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.169: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.169: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.170: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.170: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.171: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.171: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 0 Jan 13 07:00:14.171: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:14.171: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:14.172: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.172: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.173: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.173: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.183: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.183: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.247: INFO: observed Deployment test-deployment in namespace 
deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.247: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.349: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.349: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 2 Jan 13 07:00:14.379: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 STEP: listing Deployments Jan 13 07:00:14.427: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jan 13 07:00:14.642: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Jan 13 07:00:14.779: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:14.798: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:14.997: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:15.363: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:15.418: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:15.435: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:15.598: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 07:00:16.288: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jan 13 07:00:20.304: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.304: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.304: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.305: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.305: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.305: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.306: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 Jan 13 07:00:20.306: INFO: observed Deployment test-deployment in namespace deployment-4719 with ReadyReplicas 1 STEP: deleting the Deployment Jan 13 07:00:20.558: INFO: observed event type MODIFIED Jan 13 07:00:20.558: INFO: observed event type MODIFIED Jan 13 07:00:20.559: INFO: observed event type MODIFIED Jan 13 07:00:20.559: INFO: observed event type MODIFIED Jan 13 07:00:20.559: INFO: observed event type MODIFIED Jan 13 
07:00:20.560: INFO: observed event type MODIFIED Jan 13 07:00:20.560: INFO: observed event type MODIFIED Jan 13 07:00:20.560: INFO: observed event type MODIFIED Jan 13 07:00:20.561: INFO: observed event type MODIFIED Jan 13 07:00:20.561: INFO: observed event type MODIFIED Jan 13 07:00:20.561: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 07:00:20.616: INFO: Log out all the ReplicaSets if there is no deployment created Jan 13 07:00:20.781: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-4719 dbe916a8-0e75-48d4-876d-20e168137ea7 495452 3 2021-01-13 07:00:14 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment e0fb8cc4-6141-4513-ba35-55cbb71dfda3 0x4004b024f7 0x4004b024f8}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0fb8cc4-6141-4513-ba35-55cbb71dfda3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004b025b0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:00:20.825: INFO: pod: "test-deployment-768947d6f5-cdbx5": &Pod{ObjectMeta:{test-deployment-768947d6f5-cdbx5 test-deployment-768947d6f5- deployment-4719 fb812491-08dc-47dc-9770-e5dfda1aa511 495457 0 2021-01-13 07:00:20 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 
dbe916a8-0e75-48d4-876d-20e168137ea7 0x4004b02d47 0x4004b02d48}] [] [{kube-controller-manager Update v1 2021-01-13 07:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dbe916a8-0e75-48d4-876d-20e168137ea7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:00:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sscmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sscmn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sscmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},
},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:00:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:00:20.826: INFO: pod: "test-deployment-768947d6f5-lq8zx": &Pod{ObjectMeta:{test-deployment-768947d6f5-lq8zx test-deployment-768947d6f5- deployment-4719 eb6f3f2e-f700-4035-bfa9-8e4c52715346 495435 0 2021-01-13 07:00:15 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 dbe916a8-0e75-48d4-876d-20e168137ea7 0x4004b03027 0x4004b03028}] [] [{kube-controller-manager Update v1 2021-01-13 07:00:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dbe916a8-0e75-48d4-876d-20e168137ea7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:00:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.217\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sscmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sscmn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sscmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:00:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.217,StartTime:2021-01-13 07:00:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:00:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://308464c2ca71b23e2d2093f60a641ef4d49a8a48abf95fd03e775c7bc74991db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:00:20.828: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-4719 391ea742-5f4e-4288-a907-7588cc4a1600 495453 4 2021-01-13 07:00:14 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment e0fb8cc4-6141-4513-ba35-55cbb71dfda3 0x4004b02667 0x4004b02668}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:00:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0fb8cc4-6141-4513-ba35-55cbb71dfda3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004b027a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:00:20.835: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-4719 a2408d8e-78bd-49ad-ae0b-fc558ccec09a 495375 2 2021-01-13 07:00:09 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment e0fb8cc4-6141-4513-ba35-55cbb71dfda3 0x4004b02817 0x4004b02818}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:00:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0fb8cc4-6141-4513-ba35-55cbb71dfda3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x4004b02890 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:00:20.843: INFO: pod: "test-deployment-8b6954bfb-ccdz5": &Pod{ObjectMeta:{test-deployment-8b6954bfb-ccdz5 test-deployment-8b6954bfb- deployment-4719 65761ea7-e945-4697-95fe-208d35e39149 495343 0 2021-01-13 07:00:09 +0000 UTC map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb a2408d8e-78bd-49ad-ae0b-fc558ccec09a 0x40049a7ce7 0x40049a7ce8}] [] [{kube-controller-manager Update v1 2021-01-13 07:00:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2408d8e-78bd-49ad-ae0b-fc558ccec09a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:00:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sscmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sscmn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sscmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:00:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.40,StartTime:2021-01-13 07:00:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:00:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://f7734dfb79c9d338cd22a2e9fb27ac9c58f96b01b637f85e0d5e4435cd44e1d6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:00:20.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4719" for this suite. 
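For reference, a minimal client-go sketch of the same lifecycle operations the Deployment test above drives through the API (create, patch, delete); the namespace is a placeholder and error handling is reduced to panics.

    // deployment_lifecycle_sketch.go: illustrative only, assuming client-go v0.20.x.
    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := cs.AppsV1().Deployments("default") // placeholder namespace

        labels := map[string]string{"test-deployment-static": "true"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-deployment", Labels: labels},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(2),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "test-deployment",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }

        // create the Deployment
        if _, err := deployments.Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        // patch it: add a label, the same kind of change as the "patching the Deployment" step
        patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}}}`)
        if _, err := deployments.Patch(context.TODO(), "test-deployment", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
            panic(err)
        }
        // delete it, letting the garbage collector remove the ReplicaSets and Pods
        if err := deployments.Delete(context.TODO(), "test-deployment", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
    }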
• [SLOW TEST:11.444 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":309,"completed":91,"skipped":1596,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:00:20.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:00:21.191: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:00:23.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5118" for this suite. 
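For reference, a minimal apiextensions-apiserver client sketch of the mechanism the defaulting test relies on: a structural OpenAPI v3 schema whose default the API server applies both to incoming requests and to objects read back from storage. The group, kind and field names here are invented, not the ones the test generates.

    // crd_defaulting_sketch.go: illustrative only, assuming k8s.io/apiextensions-apiserver v0.20.x.
    package main

    import (
        "context"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        crdClient, err := apiextensionsclientset.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Structural schema with a defaulted field: the API server fills in
        // spec.replicas=1 on create/update requests and when objects are read
        // back from etcd, which is what "for requests and from storage" means.
        one := apiextensionsv1.JSON{Raw: []byte("1")}
        crd := &apiextensionsv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextensionsv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextensionsv1.NamespaceScoped,
                Names: apiextensionsv1.CustomResourceDefinitionNames{
                    Plural:   "widgets",
                    Singular: "widget",
                    Kind:     "Widget",
                    ListKind: "WidgetList",
                },
                Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                    Name:    "v1",
                    Served:  true,
                    Storage: true,
                    Schema: &apiextensionsv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                            Type: "object",
                            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                "spec": {
                                    Type: "object",
                                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                        "replicas": {Type: "integer", Default: &one},
                                    },
                                },
                            },
                        },
                    },
                }},
            },
        }
        if _, err := crdClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }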
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":309,"completed":92,"skipped":1603,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:00:23.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 13 07:00:24.020: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:02:28.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-383" for this suite. • [SLOW TEST:125.674 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":309,"completed":93,"skipped":1603,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:02:28.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:02:34.453: INFO: 
deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:02:36.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118154, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118154, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118154, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118154, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:02:39.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:02:39.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5944" for this suite. STEP: Destroying namespace "webhook-5944-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.821 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":309,"completed":94,"skipped":1610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:02:39.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:02:39.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5629" for this suite. 
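The Endpoint lifecycle walked through above maps onto ordinary kubectl operations; a sketch with illustrative names and addresses (the conformance test drives the same verbs through the client library):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: e2e-example-endpoint
  labels:
    test: lifecycle
subsets:
- addresses:
  - ip: 10.0.0.10
  ports:
  - port: 80
EOF
kubectl get endpoints -l test=lifecycle                 # list
kubectl patch endpoints e2e-example-endpoint --type=merge -p '{"metadata":{"labels":{"patched":"true"}}}'
kubectl get endpoints e2e-example-endpoint -o yaml      # fetch and inspect the patch
kubectl delete endpoints -l test=lifecycle              # "delete by Collection": a label-selected delete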
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":309,"completed":95,"skipped":1640,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:02:40.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-secret-g474 STEP: Creating a pod to test atomic-volume-subpath Jan 13 07:02:40.136: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-g474" in namespace "subpath-3487" to be "Succeeded or Failed" Jan 13 07:02:40.153: INFO: Pod "pod-subpath-test-secret-g474": Phase="Pending", Reason="", readiness=false. Elapsed: 16.786249ms Jan 13 07:02:42.162: INFO: Pod "pod-subpath-test-secret-g474": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025578628s Jan 13 07:02:44.170: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 4.034100724s Jan 13 07:02:46.178: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 6.041758938s Jan 13 07:02:48.186: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 8.050200752s Jan 13 07:02:50.195: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 10.058765785s Jan 13 07:02:52.204: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 12.067941659s Jan 13 07:02:54.213: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 14.076871248s Jan 13 07:02:56.221: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 16.085090943s Jan 13 07:02:58.229: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 18.092687643s Jan 13 07:03:00.238: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 20.10134484s Jan 13 07:03:02.246: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 22.110019546s Jan 13 07:03:04.253: INFO: Pod "pod-subpath-test-secret-g474": Phase="Running", Reason="", readiness=true. Elapsed: 24.116368613s Jan 13 07:03:06.261: INFO: Pod "pod-subpath-test-secret-g474": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.125061895s STEP: Saw pod success Jan 13 07:03:06.262: INFO: Pod "pod-subpath-test-secret-g474" satisfied condition "Succeeded or Failed" Jan 13 07:03:06.266: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-secret-g474 container test-container-subpath-secret-g474: STEP: delete the pod Jan 13 07:03:06.326: INFO: Waiting for pod pod-subpath-test-secret-g474 to disappear Jan 13 07:03:06.336: INFO: Pod pod-subpath-test-secret-g474 no longer exists STEP: Deleting pod pod-subpath-test-secret-g474 Jan 13 07:03:06.336: INFO: Deleting pod "pod-subpath-test-secret-g474" in namespace "subpath-3487" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:03:06.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3487" for this suite. • [SLOW TEST:26.342 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":309,"completed":96,"skipped":1643,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:03:06.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:03:10.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4605" for this suite. 
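What the kubelet spec above asserts is that a container whose command always fails ends up with a terminated state carrying a reason. A hand-written approximation (pod name, image and restart policy are assumptions of this sketch, not what the suite itself uses):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["/bin/false"]     # exits non-zero immediately
EOF
# Once the container has exited, its terminated reason is populated (typically "Error"):
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'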
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":309,"completed":97,"skipped":1645,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:03:10.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:03:15.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5069" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":309,"completed":98,"skipped":1653,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:03:15.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:03:15.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a" in namespace "downward-api-3883" to be "Succeeded or Failed" Jan 13 07:03:15.400: INFO: Pod "downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.641341ms Jan 13 07:03:18.445: INFO: Pod "downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.068351229s Jan 13 07:03:20.480: INFO: Pod "downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 5.103306044s STEP: Saw pod success Jan 13 07:03:20.481: INFO: Pod "downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a" satisfied condition "Succeeded or Failed" Jan 13 07:03:20.494: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a container client-container: STEP: delete the pod Jan 13 07:03:20.542: INFO: Waiting for pod downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a to disappear Jan 13 07:03:20.594: INFO: Pod downwardapi-volume-879fbc15-12b1-4bd7-acf4-06a76f38417a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:03:20.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3883" for this suite. • [SLOW TEST:5.346 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":309,"completed":99,"skipped":1661,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:03:20.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1603 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1603 STEP: creating replication controller externalsvc in namespace services-1603 I0113 07:03:21.157885 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1603, replica count: 2 I0113 07:03:24.209406 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 07:03:27.210221 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 13 07:03:27.280: INFO: Creating new exec pod Jan 13 07:03:31.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1603 exec execpod7qtzr -- /bin/sh -x -c nslookup 
nodeport-service.services-1603.svc.cluster.local' Jan 13 07:03:32.902: INFO: stderr: "I0113 07:03:32.771535 1204 log.go:181] (0x400003a420) (0x4000738000) Create stream\nI0113 07:03:32.774038 1204 log.go:181] (0x400003a420) (0x4000738000) Stream added, broadcasting: 1\nI0113 07:03:32.785178 1204 log.go:181] (0x400003a420) Reply frame received for 1\nI0113 07:03:32.785764 1204 log.go:181] (0x400003a420) (0x4000397720) Create stream\nI0113 07:03:32.785828 1204 log.go:181] (0x400003a420) (0x4000397720) Stream added, broadcasting: 3\nI0113 07:03:32.786987 1204 log.go:181] (0x400003a420) Reply frame received for 3\nI0113 07:03:32.787275 1204 log.go:181] (0x400003a420) (0x4000397c20) Create stream\nI0113 07:03:32.787333 1204 log.go:181] (0x400003a420) (0x4000397c20) Stream added, broadcasting: 5\nI0113 07:03:32.788478 1204 log.go:181] (0x400003a420) Reply frame received for 5\nI0113 07:03:32.864456 1204 log.go:181] (0x400003a420) Data frame received for 5\nI0113 07:03:32.864793 1204 log.go:181] (0x4000397c20) (5) Data frame handling\nI0113 07:03:32.865624 1204 log.go:181] (0x4000397c20) (5) Data frame sent\n+ nslookup nodeport-service.services-1603.svc.cluster.local\nI0113 07:03:32.881649 1204 log.go:181] (0x400003a420) Data frame received for 3\nI0113 07:03:32.881782 1204 log.go:181] (0x4000397720) (3) Data frame handling\nI0113 07:03:32.881898 1204 log.go:181] (0x4000397720) (3) Data frame sent\nI0113 07:03:32.882593 1204 log.go:181] (0x400003a420) Data frame received for 3\nI0113 07:03:32.882731 1204 log.go:181] (0x4000397720) (3) Data frame handling\nI0113 07:03:32.882892 1204 log.go:181] (0x4000397720) (3) Data frame sent\nI0113 07:03:32.883367 1204 log.go:181] (0x400003a420) Data frame received for 5\nI0113 07:03:32.883484 1204 log.go:181] (0x4000397c20) (5) Data frame handling\nI0113 07:03:32.883973 1204 log.go:181] (0x400003a420) Data frame received for 3\nI0113 07:03:32.884098 1204 log.go:181] (0x4000397720) (3) Data frame handling\nI0113 07:03:32.885728 1204 log.go:181] (0x400003a420) Data frame received for 1\nI0113 07:03:32.885803 1204 log.go:181] (0x4000738000) (1) Data frame handling\nI0113 07:03:32.885872 1204 log.go:181] (0x4000738000) (1) Data frame sent\nI0113 07:03:32.887342 1204 log.go:181] (0x400003a420) (0x4000738000) Stream removed, broadcasting: 1\nI0113 07:03:32.889676 1204 log.go:181] (0x400003a420) Go away received\nI0113 07:03:32.893795 1204 log.go:181] (0x400003a420) (0x4000738000) Stream removed, broadcasting: 1\nI0113 07:03:32.894166 1204 log.go:181] (0x400003a420) (0x4000397720) Stream removed, broadcasting: 3\nI0113 07:03:32.894425 1204 log.go:181] (0x400003a420) (0x4000397c20) Stream removed, broadcasting: 5\n" Jan 13 07:03:32.903: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1603.svc.cluster.local\tcanonical name = externalsvc.services-1603.svc.cluster.local.\nName:\texternalsvc.services-1603.svc.cluster.local\nAddress: 10.96.129.174\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1603, will wait for the garbage collector to delete the pods Jan 13 07:03:32.968: INFO: Deleting ReplicationController externalsvc took: 7.070194ms Jan 13 07:03:33.569: INFO: Terminating ReplicationController externalsvc pods took: 600.853239ms Jan 13 07:04:20.203: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:04:20.246: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1603" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:59.654 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":309,"completed":100,"skipped":1665,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:04:20.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:04:27.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2920" for this suite. 
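The ReplicationController lifecycle above (create, patch, scale, list, delete by collection) can be approximated with plain kubectl; the controller name, image and replica counts below are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-lifecycle-demo
  labels:
    app: rc-lifecycle-demo
spec:
  replicas: 1
  selector:
    app: rc-lifecycle-demo
  template:
    metadata:
      labels:
        app: rc-lifecycle-demo
    spec:
      containers:
      - name: web
        image: nginx
EOF
kubectl patch rc rc-lifecycle-demo --type=merge -p '{"metadata":{"labels":{"patched":"true"}}}'
kubectl scale rc rc-lifecycle-demo --replicas=2         # the "scale to the max amount" step
kubectl get rc -l app=rc-lifecycle-demo                 # list and check replica counts
kubectl delete rc -l app=rc-lifecycle-demo              # delete by collection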
• [SLOW TEST:7.634 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":309,"completed":101,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:04:27.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 13 07:04:28.028: INFO: Waiting up to 5m0s for pod "pod-60b45c79-55f9-472c-897f-29d3e070e44e" in namespace "emptydir-2148" to be "Succeeded or Failed" Jan 13 07:04:28.042: INFO: Pod "pod-60b45c79-55f9-472c-897f-29d3e070e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.872654ms Jan 13 07:04:30.049: INFO: Pod "pod-60b45c79-55f9-472c-897f-29d3e070e44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020607254s Jan 13 07:04:32.076: INFO: Pod "pod-60b45c79-55f9-472c-897f-29d3e070e44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047681754s STEP: Saw pod success Jan 13 07:04:32.076: INFO: Pod "pod-60b45c79-55f9-472c-897f-29d3e070e44e" satisfied condition "Succeeded or Failed" Jan 13 07:04:32.140: INFO: Trying to get logs from node leguer-worker pod pod-60b45c79-55f9-472c-897f-29d3e070e44e container test-container: STEP: delete the pod Jan 13 07:04:32.203: INFO: Waiting for pod pod-60b45c79-55f9-472c-897f-29d3e070e44e to disappear Jan 13 07:04:32.208: INFO: Pod pod-60b45c79-55f9-472c-897f-29d3e070e44e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:04:32.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2148" for this suite. 
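The (non-root,0666,tmpfs) case above boils down to a pod that runs as a non-root user, writes a file with mode 0666 into a memory-backed emptyDir, and checks the resulting permissions. A sketch with an illustrative image and user id (the suite uses its own test image and verification logic):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-0666-tmpfs    # expect -rw-rw-rw- on /mnt/test/f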
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":102,"skipped":1685,"failed":0} ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:04:32.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 07:04:32.335: INFO: Waiting up to 5m0s for pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d" in namespace "downward-api-1099" to be "Succeeded or Failed" Jan 13 07:04:32.358: INFO: Pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.539612ms Jan 13 07:04:34.444: INFO: Pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108970751s Jan 13 07:04:36.454: INFO: Pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118062709s Jan 13 07:04:38.463: INFO: Pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127312157s STEP: Saw pod success Jan 13 07:04:38.463: INFO: Pod "downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d" satisfied condition "Succeeded or Failed" Jan 13 07:04:38.468: INFO: Trying to get logs from node leguer-worker2 pod downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d container dapi-container: STEP: delete the pod Jan 13 07:04:38.518: INFO: Waiting for pod downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d to disappear Jan 13 07:04:38.524: INFO: Pod downward-api-2645f5c6-0734-459b-8cb1-85cf2839884d no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:04:38.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1099" for this suite. 
• [SLOW TEST:6.327 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":309,"completed":103,"skipped":1685,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:04:38.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 13 07:04:38.643: INFO: >>> kubeConfig: /root/.kube/config Jan 13 07:05:01.361: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:06:31.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5342" for this suite. 
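For the spec above, "show up in OpenAPI documentation" means that once two CRDs share a group and version but declare different kinds, both are merged into the aggregated OpenAPI document, so kubectl explain works for each. A sketch of that check, assuming CRDs for kinds Foo and Bar in group example.com/v1 (with plurals foos and bars) already exist; all of these names are illustrative:

kubectl explain foos.spec
kubectl explain bars.spec
kubectl get --raw /openapi/v2 | grep -Eo '"kind": ?"(Foo|Bar)"' | sort -u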
• [SLOW TEST:113.302 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":309,"completed":104,"skipped":1705,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:06:31.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-map-257ec4b2-827c-48ea-97cc-0c4bfac6a393 STEP: Creating a pod to test consume secrets Jan 13 07:06:31.963: INFO: Waiting up to 5m0s for pod "pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37" in namespace "secrets-2413" to be "Succeeded or Failed" Jan 13 07:06:32.003: INFO: Pod "pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37": Phase="Pending", Reason="", readiness=false. Elapsed: 39.869236ms Jan 13 07:06:34.012: INFO: Pod "pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048184233s Jan 13 07:06:36.018: INFO: Pod "pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054881437s STEP: Saw pod success Jan 13 07:06:36.018: INFO: Pod "pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37" satisfied condition "Succeeded or Failed" Jan 13 07:06:36.022: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37 container secret-volume-test: STEP: delete the pod Jan 13 07:06:36.167: INFO: Waiting for pod pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37 to disappear Jan 13 07:06:36.173: INFO: Pod pod-secrets-32683241-8486-4d62-bc1c-59cb574c6a37 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:06:36.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2413" for this suite. 
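The mappings-and-mode behaviour verified above corresponds to the items stanza of a secret volume: a key is projected under a different file name and given its own file mode. A sketch with illustrative names (the suite generates its own secret and pod):

kubectl create secret generic mapped-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret
      items:
      - key: data-1
        path: new-path-data-1   # the mapping: the key appears under this file name
        mode: 0400              # the per-item mode (YAML reads 0400 as octal)
EOF
kubectl logs secret-mapping-demo    # expected: value-1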
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":105,"skipped":1712,"failed":0} SS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:06:36.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 13 07:06:36.424: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 13 07:06:36.433: INFO: starting watch STEP: patching STEP: updating Jan 13 07:06:36.461: INFO: waiting for watch events with expected annotations Jan 13 07:06:36.462: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:06:36.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-4548" for this suite. 
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":309,"completed":106,"skipped":1714,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:06:36.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 13 07:06:36.807: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5746 9297a29f-f44e-4032-ab8b-ad209b5a259f 496844 0 2021-01-13 07:06:36 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-13 07:06:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:06:36.810: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5746 9297a29f-f44e-4032-ab8b-ad209b5a259f 496845 0 2021-01-13 07:06:36 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-01-13 07:06:36 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:06:36.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5746" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":309,"completed":107,"skipped":1730,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:06:36.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-3518 STEP: creating service affinity-clusterip in namespace services-3518 STEP: creating replication controller affinity-clusterip in namespace services-3518 I0113 07:06:36.965481 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-3518, replica count: 3 I0113 07:06:40.017107 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 07:06:43.017797 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 07:06:43.028: INFO: Creating new exec pod Jan 13 07:06:48.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3518 exec execpod-affinityv7n8p -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 13 07:06:49.675: INFO: stderr: "I0113 07:06:49.547475 1224 log.go:181] (0x40000c40b0) (0x4000bf21e0) Create stream\nI0113 07:06:49.553025 1224 log.go:181] (0x40000c40b0) (0x4000bf21e0) Stream added, broadcasting: 1\nI0113 07:06:49.566291 1224 log.go:181] (0x40000c40b0) Reply frame received for 1\nI0113 07:06:49.567030 1224 log.go:181] (0x40000c40b0) (0x4000448a00) Create stream\nI0113 07:06:49.567113 1224 log.go:181] (0x40000c40b0) (0x4000448a00) Stream added, broadcasting: 3\nI0113 07:06:49.569035 1224 log.go:181] (0x40000c40b0) Reply frame received for 3\nI0113 07:06:49.569306 1224 log.go:181] (0x40000c40b0) (0x4000564140) Create stream\nI0113 07:06:49.569367 1224 log.go:181] (0x40000c40b0) (0x4000564140) Stream added, broadcasting: 5\nI0113 07:06:49.571391 1224 log.go:181] (0x40000c40b0) Reply frame received for 5\nI0113 07:06:49.654664 1224 log.go:181] (0x40000c40b0) Data frame received for 3\nI0113 07:06:49.655144 1224 log.go:181] (0x40000c40b0) Data frame received for 5\nI0113 07:06:49.655346 1224 log.go:181] (0x4000564140) (5) Data frame handling\nI0113 07:06:49.655623 1224 log.go:181] (0x4000448a00) (3) Data frame handling\nI0113 07:06:49.656230 1224 log.go:181] (0x40000c40b0) Data frame received for 1\nI0113 07:06:49.656334 1224 log.go:181] (0x4000bf21e0) (1) Data frame handling\nI0113 07:06:49.658997 1224 log.go:181] (0x4000564140) (5) Data 
frame sent\nI0113 07:06:49.659116 1224 log.go:181] (0x4000bf21e0) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0113 07:06:49.659820 1224 log.go:181] (0x40000c40b0) Data frame received for 5\nI0113 07:06:49.659907 1224 log.go:181] (0x4000564140) (5) Data frame handling\nI0113 07:06:49.659997 1224 log.go:181] (0x4000564140) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0113 07:06:49.660070 1224 log.go:181] (0x40000c40b0) Data frame received for 5\nI0113 07:06:49.661557 1224 log.go:181] (0x40000c40b0) (0x4000bf21e0) Stream removed, broadcasting: 1\nI0113 07:06:49.662204 1224 log.go:181] (0x4000564140) (5) Data frame handling\nI0113 07:06:49.663766 1224 log.go:181] (0x40000c40b0) Go away received\nI0113 07:06:49.667148 1224 log.go:181] (0x40000c40b0) (0x4000bf21e0) Stream removed, broadcasting: 1\nI0113 07:06:49.667454 1224 log.go:181] (0x40000c40b0) (0x4000448a00) Stream removed, broadcasting: 3\nI0113 07:06:49.667645 1224 log.go:181] (0x40000c40b0) (0x4000564140) Stream removed, broadcasting: 5\n" Jan 13 07:06:49.676: INFO: stdout: "" Jan 13 07:06:49.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3518 exec execpod-affinityv7n8p -- /bin/sh -x -c nc -zv -t -w 2 10.96.172.190 80' Jan 13 07:06:51.273: INFO: stderr: "I0113 07:06:51.156685 1244 log.go:181] (0x400003a0b0) (0x4000144000) Create stream\nI0113 07:06:51.166046 1244 log.go:181] (0x400003a0b0) (0x4000144000) Stream added, broadcasting: 1\nI0113 07:06:51.176255 1244 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0113 07:06:51.176771 1244 log.go:181] (0x400003a0b0) (0x40001440a0) Create stream\nI0113 07:06:51.176908 1244 log.go:181] (0x400003a0b0) (0x40001440a0) Stream added, broadcasting: 3\nI0113 07:06:51.178219 1244 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0113 07:06:51.178436 1244 log.go:181] (0x400003a0b0) (0x4000144140) Create stream\nI0113 07:06:51.178489 1244 log.go:181] (0x400003a0b0) (0x4000144140) Stream added, broadcasting: 5\nI0113 07:06:51.179481 1244 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0113 07:06:51.255141 1244 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 07:06:51.255671 1244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 07:06:51.256339 1244 log.go:181] (0x40001440a0) (3) Data frame handling\nI0113 07:06:51.256631 1244 log.go:181] (0x400003a0b0) Data frame received for 1\nI0113 07:06:51.256809 1244 log.go:181] (0x4000144000) (1) Data frame handling\nI0113 07:06:51.257216 1244 log.go:181] (0x4000144140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.172.190 80\nConnection to 10.96.172.190 80 port [tcp/http] succeeded!\nI0113 07:06:51.258902 1244 log.go:181] (0x4000144000) (1) Data frame sent\nI0113 07:06:51.259788 1244 log.go:181] (0x4000144140) (5) Data frame sent\nI0113 07:06:51.260612 1244 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 07:06:51.260693 1244 log.go:181] (0x4000144140) (5) Data frame handling\nI0113 07:06:51.261237 1244 log.go:181] (0x400003a0b0) (0x4000144000) Stream removed, broadcasting: 1\nI0113 07:06:51.263356 1244 log.go:181] (0x400003a0b0) Go away received\nI0113 07:06:51.265633 1244 log.go:181] (0x400003a0b0) (0x4000144000) Stream removed, broadcasting: 1\nI0113 07:06:51.265845 1244 log.go:181] (0x400003a0b0) (0x40001440a0) Stream removed, broadcasting: 3\nI0113 07:06:51.265999 1244 log.go:181] (0x400003a0b0) (0x4000144140) Stream removed, broadcasting: 5\n" Jan 13 
07:06:51.274: INFO: stdout: "" Jan 13 07:06:51.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-3518 exec execpod-affinityv7n8p -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.172.190:80/ ; done' Jan 13 07:06:52.887: INFO: stderr: "I0113 07:06:52.688458 1264 log.go:181] (0x40006b6bb0) (0x400063e0a0) Create stream\nI0113 07:06:52.694391 1264 log.go:181] (0x40006b6bb0) (0x400063e0a0) Stream added, broadcasting: 1\nI0113 07:06:52.708306 1264 log.go:181] (0x40006b6bb0) Reply frame received for 1\nI0113 07:06:52.709334 1264 log.go:181] (0x40006b6bb0) (0x40004bc000) Create stream\nI0113 07:06:52.709432 1264 log.go:181] (0x40006b6bb0) (0x40004bc000) Stream added, broadcasting: 3\nI0113 07:06:52.711288 1264 log.go:181] (0x40006b6bb0) Reply frame received for 3\nI0113 07:06:52.711583 1264 log.go:181] (0x40006b6bb0) (0x40004bd180) Create stream\nI0113 07:06:52.711654 1264 log.go:181] (0x40006b6bb0) (0x40004bd180) Stream added, broadcasting: 5\nI0113 07:06:52.712822 1264 log.go:181] (0x40006b6bb0) Reply frame received for 5\nI0113 07:06:52.774712 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.775332 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.775560 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.775657 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.776388 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.776744 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.780684 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.780801 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.780979 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.781056 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.781115 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.781187 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.784084 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.784189 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.784274 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.784340 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.784438 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.784545 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.784631 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.784696 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.784802 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.789621 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.789739 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.789828 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.790201 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.790301 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.790391 1264 log.go:181] (0x40006b6bb0) Data frame received for 
3\nI0113 07:06:52.790494 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.790560 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.790631 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.794862 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.794939 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.795025 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.795723 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.795833 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.795920 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.796025 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.796094 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.796181 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.800996 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.801118 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.801248 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.801571 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.801680 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.801765 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.801864 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.801946 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.802038 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.805971 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.806086 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.806205 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.806310 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.806409 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.806537 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.806653 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.806755 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.806857 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.811466 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.811622 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.811779 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.812755 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.813030 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.813196 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.813426 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.813566 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.813717 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.813865 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.814003 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.815167 1264 log.go:181] (0x40004bd180) (5) Data frame 
sent\nI0113 07:06:52.818555 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.818692 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.818859 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.819147 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.819266 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.819347 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.819428 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.819499 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.819567 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.824650 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.824773 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.825010 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.827038 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.827110 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.827176 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.827235 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.827287 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.827354 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.830728 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.830852 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.830966 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.831150 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.831231 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.831291 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.831354 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.831420 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.831485 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.836393 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.836483 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.836610 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.836996 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.837156 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.837277 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.837390 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.837481 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.837573 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.841423 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.841502 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.841588 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.842271 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.842369 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.842511 1264 log.go:181] (0x40004bc000) (3) Data 
frame sent\nI0113 07:06:52.842672 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.842796 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.842909 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.850577 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.850681 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.850766 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.850977 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.851037 1264 log.go:181] (0x40004bd180) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.851132 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.851241 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.851369 1264 log.go:181] (0x40004bd180) (5) Data frame sent\nI0113 07:06:52.851510 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.855351 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.855444 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.855511 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.856451 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.856555 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.856666 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.856794 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.857042 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.172.190:80/\nI0113 07:06:52.857140 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.861592 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.861718 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.861871 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.862429 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.862516 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.862587 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.862678 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.862826 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.862964 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0113 07:06:52.863094 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.863224 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.863356 1264 log.go:181] (0x40004bd180) (5) Data frame sent\n http://10.96.172.190:80/\nI0113 07:06:52.867405 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.867491 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.867580 1264 log.go:181] (0x40004bc000) (3) Data frame sent\nI0113 07:06:52.868205 1264 log.go:181] (0x40006b6bb0) Data frame received for 3\nI0113 07:06:52.868298 1264 log.go:181] (0x40004bc000) (3) Data frame handling\nI0113 07:06:52.868505 1264 log.go:181] (0x40006b6bb0) Data frame received for 5\nI0113 07:06:52.868583 1264 log.go:181] (0x40004bd180) (5) Data frame handling\nI0113 07:06:52.870581 1264 log.go:181] (0x40006b6bb0) Data frame received for 1\nI0113 07:06:52.870655 1264 log.go:181] (0x400063e0a0) 
(1) Data frame handling\nI0113 07:06:52.870776 1264 log.go:181] (0x400063e0a0) (1) Data frame sent\nI0113 07:06:52.872206 1264 log.go:181] (0x40006b6bb0) (0x400063e0a0) Stream removed, broadcasting: 1\nI0113 07:06:52.874797 1264 log.go:181] (0x40006b6bb0) Go away received\nI0113 07:06:52.878189 1264 log.go:181] (0x40006b6bb0) (0x400063e0a0) Stream removed, broadcasting: 1\nI0113 07:06:52.878893 1264 log.go:181] (0x40006b6bb0) (0x40004bc000) Stream removed, broadcasting: 3\nI0113 07:06:52.879191 1264 log.go:181] (0x40006b6bb0) (0x40004bd180) Stream removed, broadcasting: 5\n" Jan 13 07:06:52.892: INFO: stdout: "\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd\naffinity-clusterip-tc6gd" Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.892: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Received response from host: affinity-clusterip-tc6gd Jan 13 07:06:52.893: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-3518, will wait for the garbage collector to delete the pods Jan 13 07:06:53.005: INFO: Deleting ReplicationController affinity-clusterip took: 11.753814ms Jan 13 07:06:53.606: INFO: Terminating ReplicationController affinity-clusterip pods took: 601.016342ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:10.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3518" for this suite. 
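The affinity check above repeatedly execs curl against the service's cluster IP and expects every reply to name the same backend pod. A minimal sketch of the kind of Service that produces this behaviour (the selector and target port are illustrative, not taken from the suite):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip
    spec:
      type: ClusterIP
      sessionAffinity: ClientIP        # pin each client to a single backend pod
      selector:
        app: affinity-clusterip        # illustrative selector
      ports:
      - port: 80
        targetPort: 9376               # illustrative backend port
    EOF
    # from a client pod, every request should return the same pod name:
    for i in $(seq 1 16); do curl -q -s --connect-timeout 2 http://<cluster-ip>:80/; echo; done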
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:33.295 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":108,"skipped":1745,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:10.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:07:10.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f" in namespace "downward-api-9865" to be "Succeeded or Failed" Jan 13 07:07:10.297: INFO: Pod "downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.562631ms Jan 13 07:07:12.357: INFO: Pod "downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123046091s Jan 13 07:07:14.365: INFO: Pod "downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130866656s STEP: Saw pod success Jan 13 07:07:14.365: INFO: Pod "downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f" satisfied condition "Succeeded or Failed" Jan 13 07:07:14.370: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f container client-container: STEP: delete the pod Jan 13 07:07:14.431: INFO: Waiting for pod downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f to disappear Jan 13 07:07:14.437: INFO: Pod downwardapi-volume-d78718df-9ecb-4592-8758-45e0d43aba1f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:14.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9865" for this suite. 
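This test mounts a downwardAPI volume that exposes limits.memory for a container that declares no memory limit, so the kubelet reports the node's allocatable memory instead. A minimal sketch of such a pod, with illustrative names and paths:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo          # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory        # no limit is set, so node allocatable is reported
    EOF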
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":109,"skipped":1747,"failed":0} SSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:14.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Jan 13 07:07:14.634: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:14.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7787" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":309,"completed":110,"skipped":1757,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:14.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:14.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3284" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":309,"completed":111,"skipped":1769,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:14.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 07:07:19.147: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:19.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7651" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":112,"skipped":1783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:19.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 13 07:07:24.033: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8cd61b8d-266d-443d-80c6-63072bdd2c10" Jan 13 07:07:24.033: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8cd61b8d-266d-443d-80c6-63072bdd2c10" in namespace "pods-161" to be "terminated due to deadline exceeded" Jan 13 07:07:24.039: INFO: Pod "pod-update-activedeadlineseconds-8cd61b8d-266d-443d-80c6-63072bdd2c10": Phase="Running", Reason="", readiness=true. Elapsed: 5.406735ms Jan 13 07:07:26.046: INFO: Pod "pod-update-activedeadlineseconds-8cd61b8d-266d-443d-80c6-63072bdd2c10": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012811417s Jan 13 07:07:26.047: INFO: Pod "pod-update-activedeadlineseconds-8cd61b8d-266d-443d-80c6-63072bdd2c10" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:26.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-161" for this suite. 
• [SLOW TEST:6.739 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":309,"completed":113,"skipped":1811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:26.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-dqql STEP: Creating a pod to test atomic-volume-subpath Jan 13 07:07:26.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dqql" in namespace "subpath-769" to be "Succeeded or Failed" Jan 13 07:07:26.229: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Pending", Reason="", readiness=false. Elapsed: 6.827985ms Jan 13 07:07:28.247: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024447135s Jan 13 07:07:30.255: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 4.032706888s Jan 13 07:07:32.267: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 6.044393775s Jan 13 07:07:34.274: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 8.051798826s Jan 13 07:07:36.280: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 10.058008743s Jan 13 07:07:38.295: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 12.072651204s Jan 13 07:07:40.301: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 14.078422042s Jan 13 07:07:42.310: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 16.088019675s Jan 13 07:07:44.319: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 18.096514982s Jan 13 07:07:46.329: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 20.106668716s Jan 13 07:07:48.338: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Running", Reason="", readiness=true. Elapsed: 22.115946776s Jan 13 07:07:50.346: INFO: Pod "pod-subpath-test-configmap-dqql": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.123698759s STEP: Saw pod success Jan 13 07:07:50.346: INFO: Pod "pod-subpath-test-configmap-dqql" satisfied condition "Succeeded or Failed" Jan 13 07:07:50.351: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-dqql container test-container-subpath-configmap-dqql: STEP: delete the pod Jan 13 07:07:50.413: INFO: Waiting for pod pod-subpath-test-configmap-dqql to disappear Jan 13 07:07:50.423: INFO: Pod pod-subpath-test-configmap-dqql no longer exists STEP: Deleting pod pod-subpath-test-configmap-dqql Jan 13 07:07:50.423: INFO: Deleting pod "pod-subpath-test-configmap-dqql" in namespace "subpath-769" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:50.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-769" for this suite. • [SLOW TEST:24.382 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":309,"completed":114,"skipped":1837,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:50.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jan 13 07:07:50.622: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 27b4d9d8-e7ca-414c-98f6-282a0fe59b2c 497276 0 2021-01-13 07:07:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-13 07:07:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:07:50.623: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 27b4d9d8-e7ca-414c-98f6-282a0fe59b2c 497277 0 2021-01-13 07:07:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-13 07:07:50 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 13 07:07:50.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 27b4d9d8-e7ca-414c-98f6-282a0fe59b2c 497278 0 2021-01-13 07:07:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-13 07:07:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:07:50.653: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8636 27b4d9d8-e7ca-414c-98f6-282a0fe59b2c 497279 0 2021-01-13 07:07:50 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-01-13 07:07:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:07:50.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8636" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":309,"completed":115,"skipped":1846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:07:50.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:07:52.966: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 13 07:07:55.034: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118473, loc:(*time.Location)(0x7089440)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118473, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118473, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746118472, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:07:58.079: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:08:10.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4070" for this suite. STEP: Destroying namespace "webhook-4070-markers" for this suite. 
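The four sub-steps above register the same slow webhook with different timeoutSeconds/failurePolicy combinations: a timeout shorter than the webhook's latency rejects the request under failurePolicy: Fail but is tolerated under Ignore, and an empty timeout defaults to 10s in v1. A sketch of such a registration (service name, path and rules are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: slow-webhook-demo                # illustrative name
    webhooks:
    - name: slow.webhook.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      timeoutSeconds: 1                      # shorter than the webhook's 5s latency
      failurePolicy: Ignore                  # Fail would reject the request instead
      clientConfig:
        service:
          namespace: webhook-demo            # illustrative namespace
          name: e2e-test-webhook
          path: /always-allow-delay-5s       # illustrative path
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    EOF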
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:19.760 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":309,"completed":116,"skipped":1897,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:08:10.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 13 07:08:10.571: INFO: Waiting up to 5m0s for pod "pod-03547151-fd49-4809-b672-742a73af4b48" in namespace "emptydir-392" to be "Succeeded or Failed" Jan 13 07:08:10.593: INFO: Pod "pod-03547151-fd49-4809-b672-742a73af4b48": Phase="Pending", Reason="", readiness=false. Elapsed: 21.35289ms Jan 13 07:08:12.600: INFO: Pod "pod-03547151-fd49-4809-b672-742a73af4b48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028945712s Jan 13 07:08:14.609: INFO: Pod "pod-03547151-fd49-4809-b672-742a73af4b48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037934329s STEP: Saw pod success Jan 13 07:08:14.610: INFO: Pod "pod-03547151-fd49-4809-b672-742a73af4b48" satisfied condition "Succeeded or Failed" Jan 13 07:08:14.616: INFO: Trying to get logs from node leguer-worker pod pod-03547151-fd49-4809-b672-742a73af4b48 container test-container: STEP: delete the pod Jan 13 07:08:14.662: INFO: Waiting for pod pod-03547151-fd49-4809-b672-742a73af4b48 to disappear Jan 13 07:08:14.672: INFO: Pod pod-03547151-fd49-4809-b672-742a73af4b48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:08:14.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-392" for this suite. 
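The emptyDir test writes a 0644 file on the default medium and verifies its mode and content from inside the container. A minimal sketch of an equivalent pod (image and commands illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo               # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                         # default medium (node disk); medium: Memory would use tmpfs
    EOF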
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":117,"skipped":1917,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:08:14.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-map-47422baa-4c26-453c-af35-a2ef947d90c4 STEP: Creating a pod to test consume secrets Jan 13 07:08:14.797: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201" in namespace "projected-9599" to be "Succeeded or Failed" Jan 13 07:08:14.811: INFO: Pod "pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201": Phase="Pending", Reason="", readiness=false. Elapsed: 14.406355ms Jan 13 07:08:16.818: INFO: Pod "pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021011704s Jan 13 07:08:18.825: INFO: Pod "pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028458034s STEP: Saw pod success Jan 13 07:08:18.825: INFO: Pod "pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201" satisfied condition "Succeeded or Failed" Jan 13 07:08:18.831: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201 container projected-secret-volume-test: STEP: delete the pod Jan 13 07:08:18.891: INFO: Waiting for pod pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201 to disappear Jan 13 07:08:18.918: INFO: Pod pod-projected-secrets-54454bcc-7496-4680-847d-6accc2b96201 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:08:18.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9599" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":118,"skipped":1926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:08:18.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating secret secrets-6491/secret-test-26b9ff40-9479-4133-b2b2-d1a89ddbb7fc STEP: Creating a pod to test consume secrets Jan 13 07:08:19.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632" in namespace "secrets-6491" to be "Succeeded or Failed" Jan 13 07:08:19.058: INFO: Pod "pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632": Phase="Pending", Reason="", readiness=false. Elapsed: 17.231285ms Jan 13 07:08:21.066: INFO: Pod "pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02527067s Jan 13 07:08:23.074: INFO: Pod "pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033403643s STEP: Saw pod success Jan 13 07:08:23.074: INFO: Pod "pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632" satisfied condition "Succeeded or Failed" Jan 13 07:08:23.079: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632 container env-test: STEP: delete the pod Jan 13 07:08:23.156: INFO: Waiting for pod pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632 to disappear Jan 13 07:08:23.166: INFO: Pod pod-configmaps-54e7a226-ed58-43eb-b098-0b30088b5632 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:08:23.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6491" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":119,"skipped":1989,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:08:23.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 13 07:08:27.861: INFO: Successfully updated pod "labelsupdate92b25f9e-ea49-46fd-bd91-8a7eecc11c78" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:08:29.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9564" for this suite. • [SLOW TEST:6.703 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":120,"skipped":1991,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:08:29.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5776 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 13 07:08:30.058: INFO: Found 0 stateful 
pods, waiting for 3 Jan 13 07:08:40.074: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:08:40.074: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:08:40.074: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 13 07:08:50.067: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:08:50.067: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:08:50.067: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 13 07:08:50.109: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 13 07:09:00.196: INFO: Updating stateful set ss2 Jan 13 07:09:00.233: INFO: Waiting for Pod statefulset-5776/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:09:10.249: INFO: Waiting for Pod statefulset-5776/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:09:20.249: INFO: Waiting for Pod statefulset-5776/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 13 07:09:30.953: INFO: Found 2 stateful pods, waiting for 3 Jan 13 07:09:40.963: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:09:40.963: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:09:40.963: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 13 07:09:41.009: INFO: Updating stateful set ss2 Jan 13 07:09:41.132: INFO: Waiting for Pod statefulset-5776/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:09:51.148: INFO: Waiting for Pod statefulset-5776/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:10:01.165: INFO: Waiting for Pod statefulset-5776/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:10:11.146: INFO: Waiting for Pod statefulset-5776/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:10:21.147: INFO: Waiting for Pod statefulset-5776/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:10:31.184: INFO: Updating stateful set ss2 Jan 13 07:10:31.271: INFO: Waiting for StatefulSet statefulset-5776/ss2 to complete update Jan 13 07:10:31.271: INFO: Waiting for Pod statefulset-5776/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 07:10:41.286: INFO: Waiting for StatefulSet statefulset-5776/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 07:10:51.292: INFO: Deleting all statefulset in ns statefulset-5776 Jan 13 07:10:51.298: INFO: Scaling statefulset ss2 to 0 Jan 13 07:11:31.335: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 07:11:31.341: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:11:31.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5776" for this suite. • [SLOW TEST:181.483 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":309,"completed":121,"skipped":1995,"failed":0} [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:11:31.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 07:11:32.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8894 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 13 07:11:36.689: INFO: stderr: "" Jan 13 07:11:36.689: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 13 07:11:41.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8894 get pod e2e-test-httpd-pod -o json' Jan 13 07:11:43.066: INFO: stderr: "" Jan 13 07:11:43.066: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-01-13T07:11:36Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n 
},\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-13T07:11:36Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.57\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-01-13T07:11:39Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8894\",\n \"resourceVersion\": \"498275\",\n \"uid\": \"8e066397-2349-4b00-b3e9-72f3a96e9b4d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j6vv9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j6vv9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j6vv9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T07:11:36Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T07:11:39Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T07:11:39Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-01-13T07:11:36Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://597f267ca32e499ae8209b180eef72cafd64d2a116cc22fd6263e94b7c4be830\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": 
\"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-01-13T07:11:39Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.57\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.57\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2021-01-13T07:11:36Z\"\n }\n}\n" STEP: replace the image in the pod Jan 13 07:11:43.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8894 replace -f -' Jan 13 07:11:46.047: INFO: stderr: "" Jan 13 07:11:46.047: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 13 07:11:46.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8894 delete pods e2e-test-httpd-pod' Jan 13 07:11:59.826: INFO: stderr: "" Jan 13 07:11:59.826: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:11:59.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8894" for this suite. • [SLOW TEST:28.468 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":309,"completed":122,"skipped":1995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:11:59.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-configmap-d84c STEP: Creating a pod to test atomic-volume-subpath Jan 13 
07:12:00.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d84c" in namespace "subpath-8350" to be "Succeeded or Failed" Jan 13 07:12:00.038: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.736377ms Jan 13 07:12:02.045: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021713854s Jan 13 07:12:04.053: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 4.030089761s Jan 13 07:12:06.061: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 6.037667233s Jan 13 07:12:08.069: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 8.045953508s Jan 13 07:12:10.079: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 10.055829978s Jan 13 07:12:12.086: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 12.062508943s Jan 13 07:12:14.094: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 14.070415982s Jan 13 07:12:16.102: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 16.078439483s Jan 13 07:12:18.110: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 18.086361898s Jan 13 07:12:20.119: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 20.096092153s Jan 13 07:12:22.127: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Running", Reason="", readiness=true. Elapsed: 22.104131025s Jan 13 07:12:24.135: INFO: Pod "pod-subpath-test-configmap-d84c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.11213882s STEP: Saw pod success Jan 13 07:12:24.136: INFO: Pod "pod-subpath-test-configmap-d84c" satisfied condition "Succeeded or Failed" Jan 13 07:12:24.141: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-d84c container test-container-subpath-configmap-d84c: STEP: delete the pod Jan 13 07:12:24.223: INFO: Waiting for pod pod-subpath-test-configmap-d84c to disappear Jan 13 07:12:24.228: INFO: Pod pod-subpath-test-configmap-d84c no longer exists STEP: Deleting pod pod-subpath-test-configmap-d84c Jan 13 07:12:24.228: INFO: Deleting pod "pod-subpath-test-configmap-d84c" in namespace "subpath-8350" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:24.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8350" for this suite. 
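"mountPath of existing file" means a single configmap key is projected with subPath over a file path that already exists in the image, instead of shadowing the whole directory. A sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-configmap-demo           # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/resolv.conf"]           # an existing file, now backed by the configmap key
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/resolv.conf
          subPath: resolv.conf
      volumes:
      - name: configmap-volume
        configMap:
          name: my-configmap                 # illustrative configmap with a resolv.conf key
          items:
          - key: resolv.conf
            path: resolv.conf
    EOF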
• [SLOW TEST:24.401 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":309,"completed":123,"skipped":2061,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:24.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:24.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8614" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":309,"completed":124,"skipped":2065,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:24.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:28.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6896" for this suite. 
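The ServiceAccounts lifecycle spec a little further up (create, patch, find by label selector, delete) maps onto a handful of client-go calls. A minimal sketch follows; the namespace, ServiceAccount name, and label are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // the suite uses a generated namespace; "default" is only for illustration
	sas := cs.CoreV1().ServiceAccounts(ns)

	// creating a ServiceAccount
	sa, err := sas.Create(context.TODO(),
		&corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "lifecycle-demo"}},
		metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// patching the ServiceAccount with a label
	patch := []byte(`{"metadata":{"labels":{"updated":"true"}}}`)
	if _, err := sas.Patch(context.TODO(), sa.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// finding the ServiceAccount in a list filtered by LabelSelector
	list, err := sas.List(context.TODO(), metav1.ListOptions{LabelSelector: "updated=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println("matching ServiceAccounts:", len(list.Items))

	// deleting the ServiceAccount
	if err := sas.Delete(context.TODO(), sa.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}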
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":309,"completed":125,"skipped":2081,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:28.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:39.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6394" for this suite. • [SLOW TEST:11.171 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":309,"completed":126,"skipped":2084,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:39.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:40.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7240" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":309,"completed":127,"skipped":2102,"failed":0} SSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:40.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 13 07:12:40.281: INFO: starting watch STEP: patching STEP: updating Jan 13 07:12:40.311: INFO: waiting for watch events with expected annotations Jan 13 07:12:40.312: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:40.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-4076" for this suite. 
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":309,"completed":128,"skipped":2106,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:40.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-edfa9610-15b9-4fc3-9ed9-f07d5c9a9382 STEP: Creating a pod to test consume configMaps Jan 13 07:12:40.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5" in namespace "configmap-3886" to be "Succeeded or Failed" Jan 13 07:12:40.518: INFO: Pod "pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.878111ms Jan 13 07:12:42.552: INFO: Pod "pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050496291s Jan 13 07:12:44.559: INFO: Pod "pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05824551s STEP: Saw pod success Jan 13 07:12:44.560: INFO: Pod "pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5" satisfied condition "Succeeded or Failed" Jan 13 07:12:44.564: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5 container agnhost-container: STEP: delete the pod Jan 13 07:12:44.632: INFO: Waiting for pod pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5 to disappear Jan 13 07:12:44.637: INFO: Pod pod-configmaps-a2f1427e-65e8-430f-ad74-80775ff687e5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:12:44.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3886" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":129,"skipped":2109,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:12:44.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 13 07:12:52.843: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 13 07:12:52.877: INFO: Pod pod-with-poststart-http-hook still exists Jan 13 07:12:54.878: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 13 07:12:54.887: INFO: Pod pod-with-poststart-http-hook still exists Jan 13 07:12:56.877: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 13 07:12:56.886: INFO: Pod pod-with-poststart-http-hook still exists Jan 13 07:12:58.878: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 13 07:12:58.886: INFO: Pod pod-with-poststart-http-hook still exists Jan 13 07:13:00.878: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 13 07:13:00.884: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:00.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5998" for this suite. 
• [SLOW TEST:16.253 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":309,"completed":130,"skipped":2122,"failed":0} SSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:00.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of pod templates Jan 13 07:13:01.051: INFO: created test-podtemplate-1 Jan 13 07:13:01.072: INFO: created test-podtemplate-2 Jan 13 07:13:01.079: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Jan 13 07:13:01.087: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Jan 13 07:13:01.130: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:01.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8088" for this suite. 
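The PodTemplates spec above creates a labelled set of templates and removes them with a single DeleteCollection call. A minimal sketch of the same flow; the namespace, names, label, and template image are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative namespace
	templates := cs.CoreV1().PodTemplates(ns)

	// Create a small set of labelled pod templates.
	for i := 1; i <= 3; i++ {
		_, err := templates.Create(context.TODO(), &corev1.PodTemplate{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-podtemplate-%d", i),
				Labels: map[string]string{"podtemplate-set": "true"}, // assumed label used as the selector below
			},
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		}, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
	}

	// Delete the whole set with one DeleteCollection call filtered by the label.
	if err := templates.DeleteCollection(context.TODO(), metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "podtemplate-set=true"}); err != nil {
		panic(err)
	}

	// Confirm the labelled collection is gone.
	list, err := templates.List(context.TODO(), metav1.ListOptions{LabelSelector: "podtemplate-set=true"})
	if err != nil {
		panic(err)
	}
	fmt.Println("remaining labelled pod templates:", len(list.Items))
}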
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":309,"completed":131,"skipped":2125,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:01.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:01.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-428" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":309,"completed":132,"skipped":2143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:01.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:13:01.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414" in namespace "projected-118" to be "Succeeded or Failed" Jan 13 07:13:01.482: INFO: Pod "downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414": Phase="Pending", Reason="", readiness=false. Elapsed: 21.62263ms Jan 13 07:13:03.490: INFO: Pod "downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029573448s Jan 13 07:13:05.499: INFO: Pod "downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038642314s STEP: Saw pod success Jan 13 07:13:05.499: INFO: Pod "downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414" satisfied condition "Succeeded or Failed" Jan 13 07:13:05.507: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414 container client-container: STEP: delete the pod Jan 13 07:13:05.531: INFO: Waiting for pod downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414 to disappear Jan 13 07:13:05.545: INFO: Pod downwardapi-volume-420766ff-b688-48ea-a07d-ec266bbc2414 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-118" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":133,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:05.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:21.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6810" for this suite. • [SLOW TEST:16.387 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":309,"completed":134,"skipped":2258,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:21.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:40.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4170" for this suite. • [SLOW TEST:18.686 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":309,"completed":135,"skipped":2259,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:40.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 07:13:40.719: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 07:13:40.740: INFO: Waiting for terminating namespaces to be deleted... 
Jan 13 07:13:40.745: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 07:13:40.756: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.756: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 07:13:40.757: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 07:13:40.757: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 07:13:40.757: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 07:13:40.757: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 07:13:40.757: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 07:13:40.757: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 07:13:40.757: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 07:13:40.757: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 07:13:40.757: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.757: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 07:13:40.757: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 07:13:40.771: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.771: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 07:13:40.771: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.771: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 07:13:40.772: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 07:13:40.772: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, 
restart count 0 Jan 13 07:13:40.772: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 07:13:40.772: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 07:13:40.772: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 07:13:40.772: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 07:13:40.772: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 07:13:40.772: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Jan 13 07:13:40.920: INFO: Pod rally-a8f48c6d-3kmika18-pdtzv requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.920: INFO: Pod rally-a8f48c6d-3kmika18-pllzg requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-4cyi45kq-j5tzz requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-4cyi45kq-knr4r requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-f3hls6a3-57dwc requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-f3hls6a3-dwt8n requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-1y3amfc0-hh9qk requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-1y3amfc0-lp8st requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-9pqmjehi-85slb requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-9pqmjehi-9zwjj requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-vnukxqu0-llj24 requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod rally-a8f48c6d-vnukxqu0-v85kr requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod chaos-controller-manager-69c479c674-s796v requesting resource cpu=25m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod chaos-daemon-ffkg7 requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod chaos-daemon-lv692 requesting resource cpu=0m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod kindnet-8wggd requesting resource cpu=100m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod kindnet-psm25 requesting resource cpu=100m on Node leguer-worker Jan 13 07:13:40.921: INFO: Pod kube-proxy-29gxg requesting resource cpu=0m on Node leguer-worker2 Jan 13 07:13:40.921: INFO: Pod kube-proxy-bmbcs requesting resource cpu=0m on Node leguer-worker STEP: Starting Pods to consume most of the cluster CPU. 
Jan 13 07:13:40.921: INFO: Creating a pod which consumes cpu=11112m on Node leguer-worker Jan 13 07:13:40.934: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-89a49234-e11e-494c-a13b-b81f99daac59.1659b93ec1de7c43], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9827/filler-pod-89a49234-e11e-494c-a13b-b81f99daac59 to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-89a49234-e11e-494c-a13b-b81f99daac59.1659b93f0fcfe7b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-89a49234-e11e-494c-a13b-b81f99daac59.1659b93f680f247e], Reason = [Created], Message = [Created container filler-pod-89a49234-e11e-494c-a13b-b81f99daac59] STEP: Considering event: Type = [Normal], Name = [filler-pod-89a49234-e11e-494c-a13b-b81f99daac59.1659b93f819b34da], Reason = [Started], Message = [Started container filler-pod-89a49234-e11e-494c-a13b-b81f99daac59] STEP: Considering event: Type = [Normal], Name = [filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779.1659b93ec4fd8b72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9827/filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779 to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779.1659b93f2a13eb52], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779.1659b93f8ac811f8], Reason = [Created], Message = [Created container filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779] STEP: Considering event: Type = [Normal], Name = [filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779.1659b93f9a6ce7e3], Reason = [Started], Message = [Started container filler-pod-f646b6f2-bd4b-471b-b73f-ab0bc934f779] STEP: Considering event: Type = [Warning], Name = [additional-pod.1659b93fb5ef6994], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:13:46.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9827" for this suite. 
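The SchedulerPredicates spec above works by filling each node's allocatable CPU with pause pods and then confirming that one more CPU-requesting pod stays unschedulable with an "Insufficient cpu" event. The quantity the scheduler sums against node allocatable is the container resource request, as in this minimal sketch; the pod name, namespace, and request size are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.2", // same image the scheduling events above report
				Resources: corev1.ResourceRequirements{
					// The scheduler sums requests against node allocatable; if no node can fit
					// this request, the pod stays Pending with a FailedScheduling event.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}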
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:5.541 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":309,"completed":136,"skipped":2271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:13:46.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 07:13:46.362: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 07:14:46.458: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 13 07:14:46.518: INFO: Created pod: pod0-sched-preemption-low-priority Jan 13 07:14:46.600: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:15:14.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5068" for this suite. 
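The preemption spec above hinges on pod priority: lower-priority pods occupy the nodes, then a higher-priority (critical) pod displaces one of them. A minimal sketch of defining a PriorityClass and referencing it from a pod; the class name, value, pod name, and namespace are illustrative assumptions (the spec itself uses built-in critical classes for the critical pod).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A cluster-scoped PriorityClass; pods that reference it outrank lower-value classes.
	_, err = cs.SchedulingV1().PriorityClasses().Create(context.TODO(), &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "demo-high-priority"}, // hypothetical name
		Value:       1000000,
		Description: "illustrative high-priority class",
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// A pod that requests that priority; the scheduler may preempt lower-priority pods to place it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "demo-high-priority",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}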
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:88.656 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":309,"completed":137,"skipped":2316,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:15:14.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 13 07:15:15.355: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 13 07:16:45.587: INFO: >>> kubeConfig: /root/.kube/config Jan 13 07:17:08.237: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:18:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8452" for this suite. 
• [SLOW TEST:204.229 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":309,"completed":138,"skipped":2334,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:18:39.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-d38832a6-13a3-4592-afd2-6ca7efb90f52 STEP: Creating a pod to test consume configMaps Jan 13 07:18:39.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7" in namespace "configmap-1778" to be "Succeeded or Failed" Jan 13 07:18:39.279: INFO: Pod "pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 67.171613ms Jan 13 07:18:41.322: INFO: Pod "pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109793485s Jan 13 07:18:43.328: INFO: Pod "pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116352289s STEP: Saw pod success Jan 13 07:18:43.328: INFO: Pod "pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7" satisfied condition "Succeeded or Failed" Jan 13 07:18:43.334: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7 container agnhost-container: STEP: delete the pod Jan 13 07:18:43.386: INFO: Waiting for pod pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7 to disappear Jan 13 07:18:43.396: INFO: Pod pod-configmaps-34b4c540-20c0-4c70-bafc-ff98dfd6a6b7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:18:43.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1778" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":139,"skipped":2334,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:18:43.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:18:43.487: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Jan 13 07:18:45.618: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:18:45.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-621" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":309,"completed":140,"skipped":2344,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:18:45.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:18:49.808: INFO: Deleting pod "var-expansion-a6e08882-f82a-471b-aff5-af2c9c61da8e" in namespace "var-expansion-6434" Jan 13 07:18:49.815: INFO: Wait up to 5m0s for pod "var-expansion-a6e08882-f82a-471b-aff5-af2c9c61da8e" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:19:31.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6434" for this suite. • [SLOW TEST:46.191 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":309,"completed":141,"skipped":2346,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:19:31.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0113 07:19:33.598755 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Jan 13 07:20:35.667: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:20:35.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-519" for this suite. • [SLOW TEST:63.820 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":309,"completed":142,"skipped":2359,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:20:35.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 13 07:20:38.598: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 13 07:20:40.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119238, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119238, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119238, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119238, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:20:43.700: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:20:43.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:20:44.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9221" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.363 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":309,"completed":143,"skipped":2361,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:20:45.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test env composition Jan 13 07:20:45.186: INFO: Waiting up to 5m0s for pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a" in namespace "var-expansion-5413" to be "Succeeded or Failed" Jan 13 07:20:45.227: INFO: Pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 40.886631ms Jan 13 07:20:47.299: INFO: Pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112931938s Jan 13 07:20:49.347: INFO: Pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161291433s Jan 13 07:20:51.381: INFO: Pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.195625433s STEP: Saw pod success Jan 13 07:20:51.382: INFO: Pod "var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a" satisfied condition "Succeeded or Failed" Jan 13 07:20:51.393: INFO: Trying to get logs from node leguer-worker pod var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a container dapi-container: STEP: delete the pod Jan 13 07:20:51.444: INFO: Waiting for pod var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a to disappear Jan 13 07:20:51.453: INFO: Pod var-expansion-8703b35e-71ce-4cd0-96d5-32718dd59c9a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:20:51.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5413" for this suite. • [SLOW TEST:6.418 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":309,"completed":144,"skipped":2370,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:20:51.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-56d2c653-9a7b-4d68-b761-57017244087e STEP: Creating a pod to test consume configMaps Jan 13 07:20:51.569: INFO: Waiting up to 5m0s for pod "pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97" in namespace "configmap-4899" to be "Succeeded or Failed" Jan 13 07:20:51.592: INFO: Pod "pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97": Phase="Pending", Reason="", readiness=false. Elapsed: 23.090193ms Jan 13 07:20:53.601: INFO: Pod "pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031820414s Jan 13 07:20:55.607: INFO: Pod "pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038236352s STEP: Saw pod success Jan 13 07:20:55.608: INFO: Pod "pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97" satisfied condition "Succeeded or Failed" Jan 13 07:20:55.612: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97 container configmap-volume-test: STEP: delete the pod Jan 13 07:20:55.631: INFO: Waiting for pod pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97 to disappear Jan 13 07:20:55.699: INFO: Pod pod-configmaps-458d6cc5-be26-4582-b4b1-d413854dbb97 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:20:55.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4899" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":145,"skipped":2383,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:20:55.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name projected-secret-test-8ec538e2-d4c9-459f-9c18-0645a5cdd1c6 STEP: Creating a pod to test consume secrets Jan 13 07:20:55.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8" in namespace "projected-7158" to be "Succeeded or Failed" Jan 13 07:20:55.882: INFO: Pod "pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.497638ms Jan 13 07:20:57.890: INFO: Pod "pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030226477s Jan 13 07:20:59.898: INFO: Pod "pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038659681s STEP: Saw pod success Jan 13 07:20:59.899: INFO: Pod "pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8" satisfied condition "Succeeded or Failed" Jan 13 07:20:59.904: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8 container secret-volume-test: STEP: delete the pod Jan 13 07:21:00.283: INFO: Waiting for pod pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8 to disappear Jan 13 07:21:00.293: INFO: Pod pod-projected-secrets-16504fd4-cc37-4cbb-b078-02bf5f5be2e8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:00.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7158" for this suite. 
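------------------------------
Note on the projected-secret test that just finished above ("should be consumable in multiple volumes in a pod"): it surfaces one Secret through two separate projected volumes mounted at different paths in the same container, then reads both back. The client-go sketch below builds a pod of that shape; the object names, mount paths, command, and image are illustrative assumptions, not values taken from this run.
package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVolumeSecretPod builds a pod in the spirit of the projected-secret test
// above: one Secret exposed through two projected volumes, both mounted into
// the same container. Names and the image are placeholders.
func multiVolumeSecretPod(secretName string) *corev1.Pod {
	projected := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
				},
			}},
		},
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: projected},
				{Name: "secret-volume-2", VolumeSource: projected},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}
------------------------------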
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":146,"skipped":2396,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:00.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:21:00.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d" in namespace "projected-1124" to be "Succeeded or Failed" Jan 13 07:21:00.455: INFO: Pod "downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004451ms Jan 13 07:21:02.466: INFO: Pod "downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014458346s Jan 13 07:21:04.474: INFO: Pod "downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022947719s STEP: Saw pod success Jan 13 07:21:04.475: INFO: Pod "downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d" satisfied condition "Succeeded or Failed" Jan 13 07:21:04.479: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d container client-container: STEP: delete the pod Jan 13 07:21:04.748: INFO: Waiting for pod downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d to disappear Jan 13 07:21:04.753: INFO: Pod downwardapi-volume-3f49a3db-8ca9-41c2-a188-bdea860d1d2d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:04.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1124" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":147,"skipped":2406,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:04.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:21:04.944: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1" in namespace "projected-1539" to be "Succeeded or Failed" Jan 13 07:21:04.966: INFO: Pod "downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.510944ms Jan 13 07:21:06.973: INFO: Pod "downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028501491s Jan 13 07:21:08.982: INFO: Pod "downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03696835s STEP: Saw pod success Jan 13 07:21:08.982: INFO: Pod "downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1" satisfied condition "Succeeded or Failed" Jan 13 07:21:08.988: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1 container client-container: STEP: delete the pod Jan 13 07:21:09.042: INFO: Waiting for pod downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1 to disappear Jan 13 07:21:09.060: INFO: Pod downwardapi-volume-81644098-c701-42cd-91ab-9b1dd8be5ed1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:09.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1539" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":148,"skipped":2409,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:09.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating Pod STEP: Reading file content from the nginx-container Jan 13 07:21:15.259: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6487 PodName:pod-sharedvolume-384b3a6b-9e31-499d-871f-65cf0a00e4dd ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 07:21:15.259: INFO: >>> kubeConfig: /root/.kube/config I0113 07:21:15.312471 10 log.go:181] (0x40008afd90) (0x40011e2140) Create stream I0113 07:21:15.312762 10 log.go:181] (0x40008afd90) (0x40011e2140) Stream added, broadcasting: 1 I0113 07:21:15.316151 10 log.go:181] (0x40008afd90) Reply frame received for 1 I0113 07:21:15.316353 10 log.go:181] (0x40008afd90) (0x4003dc4960) Create stream I0113 07:21:15.316459 10 log.go:181] (0x40008afd90) (0x4003dc4960) Stream added, broadcasting: 3 I0113 07:21:15.318268 10 log.go:181] (0x40008afd90) Reply frame received for 3 I0113 07:21:15.318428 10 log.go:181] (0x40008afd90) (0x40011e21e0) Create stream I0113 07:21:15.318509 10 log.go:181] (0x40008afd90) (0x40011e21e0) Stream added, broadcasting: 5 I0113 07:21:15.319991 10 log.go:181] (0x40008afd90) Reply frame received for 5 I0113 07:21:15.411128 10 log.go:181] (0x40008afd90) Data frame received for 5 I0113 07:21:15.411323 10 log.go:181] (0x40011e21e0) (5) Data frame handling I0113 07:21:15.411501 10 log.go:181] (0x40008afd90) Data frame received for 3 I0113 07:21:15.411687 10 log.go:181] (0x4003dc4960) (3) Data frame handling I0113 07:21:15.411844 10 log.go:181] (0x4003dc4960) (3) Data frame sent I0113 07:21:15.411970 10 log.go:181] (0x40008afd90) Data frame received for 3 I0113 07:21:15.412088 10 log.go:181] (0x4003dc4960) (3) Data frame handling I0113 07:21:15.412780 10 log.go:181] (0x40008afd90) Data frame received for 1 I0113 07:21:15.413083 10 log.go:181] (0x40011e2140) (1) Data frame handling I0113 07:21:15.413247 10 log.go:181] (0x40011e2140) (1) Data frame sent I0113 07:21:15.413376 10 log.go:181] (0x40008afd90) (0x40011e2140) Stream removed, broadcasting: 1 I0113 07:21:15.413539 10 log.go:181] (0x40008afd90) Go away received I0113 07:21:15.413861 10 log.go:181] (0x40008afd90) (0x40011e2140) Stream removed, broadcasting: 1 I0113 07:21:15.413995 10 log.go:181] (0x40008afd90) (0x4003dc4960) Stream removed, broadcasting: 3 I0113 07:21:15.414106 10 log.go:181] (0x40008afd90) (0x40011e21e0) Stream removed, broadcasting: 5 Jan 13 07:21:15.414: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:15.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6487" for this suite. • [SLOW TEST:6.354 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":309,"completed":149,"skipped":2416,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:15.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:21:18.376: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:21:20.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119278, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119278, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119278, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119278, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:21:23.484: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:21:23.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-485-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:24.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2130" for this suite. STEP: Destroying namespace "webhook-2130-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.404 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":309,"completed":150,"skipped":2417,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:24.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:21:24.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1" in namespace "downward-api-5430" to be "Succeeded or Failed" Jan 13 07:21:24.959: INFO: Pod "downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.783774ms Jan 13 07:21:26.966: INFO: Pod "downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01902273s Jan 13 07:21:28.975: INFO: Pod "downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027336056s STEP: Saw pod success Jan 13 07:21:28.975: INFO: Pod "downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1" satisfied condition "Succeeded or Failed" Jan 13 07:21:28.982: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1 container client-container: STEP: delete the pod Jan 13 07:21:29.016: INFO: Waiting for pod downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1 to disappear Jan 13 07:21:29.029: INFO: Pod downwardapi-volume-a4fc9247-20f9-4ea0-9003-32c060db11e1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:29.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5430" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":309,"completed":151,"skipped":2428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:29.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-4079 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 07:21:29.366: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 07:21:29.457: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 07:21:31.464: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 07:21:33.465: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:35.463: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:37.465: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:39.466: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:41.464: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:43.466: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:45.465: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:47.466: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 07:21:49.465: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 07:21:49.475: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 13 07:21:51.483: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 13 07:21:55.559: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 13 07:21:55.559: INFO: Breadth first check of 10.244.2.4 
on host 172.18.0.13... Jan 13 07:21:55.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:9080/dial?request=hostname&protocol=udp&host=10.244.2.4&port=8081&tries=1'] Namespace:pod-network-test-4079 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 07:21:55.565: INFO: >>> kubeConfig: /root/.kube/config I0113 07:21:55.624563 10 log.go:181] (0x40008191e0) (0x400338d4a0) Create stream I0113 07:21:55.624762 10 log.go:181] (0x40008191e0) (0x400338d4a0) Stream added, broadcasting: 1 I0113 07:21:55.628687 10 log.go:181] (0x40008191e0) Reply frame received for 1 I0113 07:21:55.628943 10 log.go:181] (0x40008191e0) (0x4003fd45a0) Create stream I0113 07:21:55.629039 10 log.go:181] (0x40008191e0) (0x4003fd45a0) Stream added, broadcasting: 3 I0113 07:21:55.630543 10 log.go:181] (0x40008191e0) Reply frame received for 3 I0113 07:21:55.630724 10 log.go:181] (0x40008191e0) (0x4003fd4640) Create stream I0113 07:21:55.630809 10 log.go:181] (0x40008191e0) (0x4003fd4640) Stream added, broadcasting: 5 I0113 07:21:55.632041 10 log.go:181] (0x40008191e0) Reply frame received for 5 I0113 07:21:55.715022 10 log.go:181] (0x40008191e0) Data frame received for 3 I0113 07:21:55.715238 10 log.go:181] (0x4003fd45a0) (3) Data frame handling I0113 07:21:55.715401 10 log.go:181] (0x4003fd45a0) (3) Data frame sent I0113 07:21:55.715825 10 log.go:181] (0x40008191e0) Data frame received for 3 I0113 07:21:55.715991 10 log.go:181] (0x4003fd45a0) (3) Data frame handling I0113 07:21:55.716164 10 log.go:181] (0x40008191e0) Data frame received for 5 I0113 07:21:55.716307 10 log.go:181] (0x4003fd4640) (5) Data frame handling I0113 07:21:55.717993 10 log.go:181] (0x40008191e0) Data frame received for 1 I0113 07:21:55.718112 10 log.go:181] (0x400338d4a0) (1) Data frame handling I0113 07:21:55.718223 10 log.go:181] (0x400338d4a0) (1) Data frame sent I0113 07:21:55.718341 10 log.go:181] (0x40008191e0) (0x400338d4a0) Stream removed, broadcasting: 1 I0113 07:21:55.718481 10 log.go:181] (0x40008191e0) Go away received I0113 07:21:55.718837 10 log.go:181] (0x40008191e0) (0x400338d4a0) Stream removed, broadcasting: 1 I0113 07:21:55.719018 10 log.go:181] (0x40008191e0) (0x4003fd45a0) Stream removed, broadcasting: 3 I0113 07:21:55.719148 10 log.go:181] (0x40008191e0) (0x4003fd4640) Stream removed, broadcasting: 5 Jan 13 07:21:55.720: INFO: Waiting for responses: map[] Jan 13 07:21:55.721: INFO: reached 10.244.2.4 after 0/1 tries Jan 13 07:21:55.721: INFO: Breadth first check of 10.244.1.69 on host 172.18.0.12... 
Jan 13 07:21:55.727: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:9080/dial?request=hostname&protocol=udp&host=10.244.1.69&port=8081&tries=1'] Namespace:pod-network-test-4079 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 07:21:55.727: INFO: >>> kubeConfig: /root/.kube/config I0113 07:21:55.779648 10 log.go:181] (0x4003cf6790) (0x4003eeeb40) Create stream I0113 07:21:55.779788 10 log.go:181] (0x4003cf6790) (0x4003eeeb40) Stream added, broadcasting: 1 I0113 07:21:55.783691 10 log.go:181] (0x4003cf6790) Reply frame received for 1 I0113 07:21:55.783945 10 log.go:181] (0x4003cf6790) (0x4003eeebe0) Create stream I0113 07:21:55.784059 10 log.go:181] (0x4003cf6790) (0x4003eeebe0) Stream added, broadcasting: 3 I0113 07:21:55.785956 10 log.go:181] (0x4003cf6790) Reply frame received for 3 I0113 07:21:55.786157 10 log.go:181] (0x4003cf6790) (0x400072a960) Create stream I0113 07:21:55.786287 10 log.go:181] (0x4003cf6790) (0x400072a960) Stream added, broadcasting: 5 I0113 07:21:55.787736 10 log.go:181] (0x4003cf6790) Reply frame received for 5 I0113 07:21:55.859851 10 log.go:181] (0x4003cf6790) Data frame received for 3 I0113 07:21:55.860032 10 log.go:181] (0x4003eeebe0) (3) Data frame handling I0113 07:21:55.860162 10 log.go:181] (0x4003eeebe0) (3) Data frame sent I0113 07:21:55.860327 10 log.go:181] (0x4003cf6790) Data frame received for 3 I0113 07:21:55.860486 10 log.go:181] (0x4003eeebe0) (3) Data frame handling I0113 07:21:55.860744 10 log.go:181] (0x4003cf6790) Data frame received for 5 I0113 07:21:55.861099 10 log.go:181] (0x400072a960) (5) Data frame handling I0113 07:21:55.861920 10 log.go:181] (0x4003cf6790) Data frame received for 1 I0113 07:21:55.862029 10 log.go:181] (0x4003eeeb40) (1) Data frame handling I0113 07:21:55.862131 10 log.go:181] (0x4003eeeb40) (1) Data frame sent I0113 07:21:55.862287 10 log.go:181] (0x4003cf6790) (0x4003eeeb40) Stream removed, broadcasting: 1 I0113 07:21:55.862457 10 log.go:181] (0x4003cf6790) Go away received I0113 07:21:55.863397 10 log.go:181] (0x4003cf6790) (0x4003eeeb40) Stream removed, broadcasting: 1 I0113 07:21:55.863597 10 log.go:181] (0x4003cf6790) (0x4003eeebe0) Stream removed, broadcasting: 3 I0113 07:21:55.863717 10 log.go:181] (0x4003cf6790) (0x400072a960) Stream removed, broadcasting: 5 Jan 13 07:21:55.863: INFO: Waiting for responses: map[] Jan 13 07:21:55.864: INFO: reached 10.244.1.69 after 0/1 tries Jan 13 07:21:55.864: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:21:55.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4079" for this suite. 
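------------------------------
Note on the intra-pod UDP check above: the test execs a curl inside test-container-pod against the agnhost webserver's /dial endpoint on port 9080, asking it to send a UDP "hostname" request to each netserver pod on port 8081 (the exact commands appear in the ExecWithOptions log lines). The helper below only reconstructs the shape of that probe URL; the IPs are placeholders, not the addresses from this run, and url.Values encodes the parameters in sorted order rather than the order shown in the log.
package sketches

import (
	"fmt"
	"net/url"
)

// dialURL builds the /dial probe URL used by the pod-to-pod UDP check:
// the webserver at proxyIP:9080 relays a UDP "hostname" request to
// targetIP:targetPort and reports the responses it received.
func dialURL(proxyIP, targetIP string, targetPort int) string {
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "udp")
	q.Set("host", targetIP)
	q.Set("port", fmt.Sprint(targetPort))
	q.Set("tries", "1")
	return fmt.Sprintf("http://%s:9080/dial?%s", proxyIP, q.Encode())
}
------------------------------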
• [SLOW TEST:26.837 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":309,"completed":152,"skipped":2477,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:21:55.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:21:55.995: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 13 07:22:01.001: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 07:22:01.002: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 07:22:07.078: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9491 92fc293c-77b1-4ec7-a651-beb5f4fbc28f 500619 1 2021-01-13 07:22:01 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2021-01-13 07:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 07:22:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003e3a9b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-13 07:22:01 +0000 UTC,LastTransitionTime:2021-01-13 07:22:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-685c4f8568" has successfully progressed.,LastUpdateTime:2021-01-13 07:22:05 +0000 UTC,LastTransitionTime:2021-01-13 07:22:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 13 07:22:07.086: INFO: New ReplicaSet "test-cleanup-deployment-685c4f8568" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-685c4f8568 deployment-9491 ae643e20-7c48-4a3f-a876-59df99fe5f4c 500608 1 2021-01-13 07:22:01 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 92fc293c-77b1-4ec7-a651-beb5f4fbc28f 0x4003e3ad37 0x4003e3ad38}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:22:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"92fc293c-77b1-4ec7-a651-beb5f4fbc28f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 685c4f8568,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003e3adc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:22:07.093: INFO: Pod "test-cleanup-deployment-685c4f8568-wj4zg" is available: &Pod{ObjectMeta:{test-cleanup-deployment-685c4f8568-wj4zg test-cleanup-deployment-685c4f8568- deployment-9491 79ef53a6-a024-4479-b9ad-9c2382ce4705 500607 0 2021-01-13 07:22:01 +0000 UTC map[name:cleanup-pod pod-template-hash:685c4f8568] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-685c4f8568 ae643e20-7c48-4a3f-a876-59df99fe5f4c 0x40049a7487 0x40049a7488}] [] [{kube-controller-manager Update v1 2021-01-13 07:22:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae643e20-7c48-4a3f-a876-59df99fe5f4c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:22:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4l2d6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4l2d6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4l2d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:22:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:22:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:22:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.71,StartTime:2021-01-13 07:22:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:22:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://b1aac88343f6050151b0fdd14033fcbb25369d9444ece31aedc8d6afbcdf633b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:07.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9491" for this suite. • [SLOW TEST:11.228 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":309,"completed":153,"skipped":2493,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:07.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 13 07:22:07.237: INFO: Waiting up to 5m0s for pod "pod-159799b2-946c-47e7-9fbb-181b93a31620" in namespace "emptydir-5379" to be "Succeeded or Failed" Jan 13 07:22:07.259: INFO: Pod "pod-159799b2-946c-47e7-9fbb-181b93a31620": Phase="Pending", Reason="", readiness=false. Elapsed: 21.54148ms Jan 13 07:22:09.266: INFO: Pod "pod-159799b2-946c-47e7-9fbb-181b93a31620": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029090977s Jan 13 07:22:11.275: INFO: Pod "pod-159799b2-946c-47e7-9fbb-181b93a31620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037456882s STEP: Saw pod success Jan 13 07:22:11.275: INFO: Pod "pod-159799b2-946c-47e7-9fbb-181b93a31620" satisfied condition "Succeeded or Failed" Jan 13 07:22:11.281: INFO: Trying to get logs from node leguer-worker pod pod-159799b2-946c-47e7-9fbb-181b93a31620 container test-container: STEP: delete the pod Jan 13 07:22:11.313: INFO: Waiting for pod pod-159799b2-946c-47e7-9fbb-181b93a31620 to disappear Jan 13 07:22:11.411: INFO: Pod pod-159799b2-946c-47e7-9fbb-181b93a31620 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:11.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5379" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":154,"skipped":2510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:11.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-7a346062-72b6-48b0-a71d-074264572edc [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:11.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9533" for this suite. 
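------------------------------
Note on the Secrets test that just ran above ("should fail to create secret due to empty secret key"): it only needs to submit a Secret whose data map contains an empty-string key and confirm the API server rejects it. A small stand-alone sketch of that request is below; the kubeconfig path, secret name, and namespace are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path and namespace.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: the API server should reject this
		},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	fmt.Println("create error (expected to be non-nil):", err)
}
------------------------------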
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":309,"completed":155,"skipped":2544,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:11.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 13 07:22:18.165: INFO: Successfully updated pod "adopt-release-4pgmm" STEP: Checking that the Job readopts the Pod Jan 13 07:22:18.166: INFO: Waiting up to 15m0s for pod "adopt-release-4pgmm" in namespace "job-2981" to be "adopted" Jan 13 07:22:18.172: INFO: Pod "adopt-release-4pgmm": Phase="Running", Reason="", readiness=true. Elapsed: 5.989549ms Jan 13 07:22:20.179: INFO: Pod "adopt-release-4pgmm": Phase="Running", Reason="", readiness=true. Elapsed: 2.013586045s Jan 13 07:22:20.180: INFO: Pod "adopt-release-4pgmm" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 13 07:22:20.701: INFO: Successfully updated pod "adopt-release-4pgmm" STEP: Checking that the Job releases the Pod Jan 13 07:22:20.701: INFO: Waiting up to 15m0s for pod "adopt-release-4pgmm" in namespace "job-2981" to be "released" Jan 13 07:22:20.724: INFO: Pod "adopt-release-4pgmm": Phase="Running", Reason="", readiness=true. Elapsed: 22.384951ms Jan 13 07:22:23.027: INFO: Pod "adopt-release-4pgmm": Phase="Running", Reason="", readiness=true. Elapsed: 2.325506352s Jan 13 07:22:23.027: INFO: Pod "adopt-release-4pgmm" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:23.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2981" for this suite. 
• [SLOW TEST:11.685 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":309,"completed":156,"skipped":2558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:23.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-9596e666-1b0c-42bc-b641-8d3fc84a3ac8 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-9596e666-1b0c-42bc-b641-8d3fc84a3ac8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:29.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7430" for this suite. 
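------------------------------
Note on the ConfigMap volume-update test above: it mounts a ConfigMap into a running pod, updates the ConfigMap object, and waits for the kubelet to refresh the mounted file, which happens on the kubelet's periodic volume sync rather than immediately. The sketch below shows the update side of that flow; the key/value pair and object names are illustrative assumptions.
package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateMountedConfigMap changes one key of a ConfigMap that is already
// mounted into a running pod; the new value appears in the mounted file
// after the kubelet's next volume sync.
func updateMountedConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // illustrative key/value; the test flips one key to a new value
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
------------------------------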
• [SLOW TEST:6.688 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":157,"skipped":2620,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:29.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-71def058-c410-462f-85a7-f09951fa8ff5 STEP: Creating a pod to test consume configMaps Jan 13 07:22:30.050: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901" in namespace "projected-5602" to be "Succeeded or Failed" Jan 13 07:22:30.076: INFO: Pod "pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901": Phase="Pending", Reason="", readiness=false. Elapsed: 25.812026ms Jan 13 07:22:32.082: INFO: Pod "pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032410982s Jan 13 07:22:34.089: INFO: Pod "pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039118026s STEP: Saw pod success Jan 13 07:22:34.089: INFO: Pod "pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901" satisfied condition "Succeeded or Failed" Jan 13 07:22:34.093: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901 container agnhost-container: STEP: delete the pod Jan 13 07:22:34.137: INFO: Waiting for pod pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901 to disappear Jan 13 07:22:34.147: INFO: Pod pod-projected-configmaps-6024e060-93ec-4768-9b6f-b1e8e8e40901 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:22:34.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5602" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":158,"skipped":2638,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:22:34.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod pod-subpath-test-projected-49m9 STEP: Creating a pod to test atomic-volume-subpath Jan 13 07:22:34.373: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-49m9" in namespace "subpath-5931" to be "Succeeded or Failed" Jan 13 07:22:34.401: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.380985ms Jan 13 07:22:36.409: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036112295s Jan 13 07:22:38.417: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 4.044188994s Jan 13 07:22:40.425: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 6.051492245s Jan 13 07:22:42.431: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 8.057817548s Jan 13 07:22:44.439: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 10.065410926s Jan 13 07:22:46.446: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 12.072703146s Jan 13 07:22:48.453: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 14.079847342s Jan 13 07:22:50.461: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 16.087865855s Jan 13 07:22:52.469: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 18.095962287s Jan 13 07:22:54.478: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 20.105096505s Jan 13 07:22:56.487: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 22.114099863s Jan 13 07:22:58.496: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Running", Reason="", readiness=true. Elapsed: 24.122734821s Jan 13 07:23:00.506: INFO: Pod "pod-subpath-test-projected-49m9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.132538989s STEP: Saw pod success Jan 13 07:23:00.506: INFO: Pod "pod-subpath-test-projected-49m9" satisfied condition "Succeeded or Failed" Jan 13 07:23:00.512: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-projected-49m9 container test-container-subpath-projected-49m9: STEP: delete the pod Jan 13 07:23:00.612: INFO: Waiting for pod pod-subpath-test-projected-49m9 to disappear Jan 13 07:23:00.616: INFO: Pod pod-subpath-test-projected-49m9 no longer exists STEP: Deleting pod pod-subpath-test-projected-49m9 Jan 13 07:23:00.616: INFO: Deleting pod "pod-subpath-test-projected-49m9" in namespace "subpath-5931" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:23:00.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5931" for this suite. • [SLOW TEST:26.441 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":309,"completed":159,"skipped":2641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:23:00.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-8838f4fc-dd17-4ebb-810b-847251b18ce0 STEP: Creating a pod to test consume configMaps Jan 13 07:23:00.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de" in namespace "configmap-6843" to be "Succeeded or Failed" Jan 13 07:23:00.783: INFO: Pod "pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de": Phase="Pending", Reason="", readiness=false. Elapsed: 27.364289ms Jan 13 07:23:02.794: INFO: Pod "pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038104852s Jan 13 07:23:04.800: INFO: Pod "pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044552647s STEP: Saw pod success Jan 13 07:23:04.800: INFO: Pod "pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de" satisfied condition "Succeeded or Failed" Jan 13 07:23:04.805: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de container agnhost-container: STEP: delete the pod Jan 13 07:23:04.868: INFO: Waiting for pod pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de to disappear Jan 13 07:23:04.872: INFO: Pod pod-configmaps-2433a86f-55c3-441b-898e-2f0afc8db9de no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:23:04.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6843" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":309,"completed":160,"skipped":2688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:23:04.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 13 07:23:10.111: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:23:11.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9628" for this suite. 
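The adoption/release sequence logged above can be tried by hand: create a bare pod carrying a 'name' label, then a ReplicaSet whose selector matches it (names and images below are illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
EOF
# the ReplicaSet controller adopts the existing pod instead of creating a new one;
# changing the pod's label afterwards releases it and the controller spawns a replacement:
$ kubectl label pod pod-adoption-release name=released --overwrite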
• [SLOW TEST:6.322 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":309,"completed":161,"skipped":2713,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:23:11.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:23:14.373: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:23:16.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:23:18.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119394, 
loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:23:21.510: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:23:21.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4317" for this suite. STEP: Destroying namespace "webhook-4317-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.605 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":309,"completed":162,"skipped":2734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:23:21.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:23:21.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77" in namespace "projected-6876" to be "Succeeded or Failed" Jan 13 07:23:21.976: INFO: Pod "downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.496911ms Jan 13 07:23:23.983: INFO: Pod "downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064146033s Jan 13 07:23:25.993: INFO: Pod "downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073766416s STEP: Saw pod success Jan 13 07:23:25.993: INFO: Pod "downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77" satisfied condition "Succeeded or Failed" Jan 13 07:23:25.998: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77 container client-container: STEP: delete the pod Jan 13 07:23:26.109: INFO: Waiting for pod downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77 to disappear Jan 13 07:23:26.119: INFO: Pod downwardapi-volume-8b12a5f0-bea7-491e-89a6-865e40de2b77 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:23:26.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6876" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":163,"skipped":2804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:23:26.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:23:26.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 13 07:23:26.953: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:26Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:26Z]] name:name1 resourceVersion:501170 uid:02f1e7ca-ca65-45cd-9b46-7d46e685c198] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 13 07:23:36.964: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:36Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:36Z]] name:name2 resourceVersion:501212 uid:5888a0fb-1b3a-4b54-b78b-2d9cdcbcfe5c] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 13 07:23:46.976: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:26Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:46Z]] name:name1 resourceVersion:501234 uid:02f1e7ca-ca65-45cd-9b46-7d46e685c198] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 13 07:23:56.989: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:56Z]] name:name2 resourceVersion:501256 uid:5888a0fb-1b3a-4b54-b78b-2d9cdcbcfe5c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 13 07:24:07.002: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:26Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:46Z]] name:name1 resourceVersion:501277 uid:02f1e7ca-ca65-45cd-9b46-7d46e685c198] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 13 07:24:17.012: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-01-13T07:23:36Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-01-13T07:23:56Z]] name:name2 resourceVersion:501297 uid:5888a0fb-1b3a-4b54-b78b-2d9cdcbcfe5c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:24:27.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5973" for this suite. 
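The ADDED/MODIFIED/DELETED events above can be observed with an ordinary CRD and kubectl. The definition below is a simplified stand-in for the one the test registers: the group matches the log, but the kind and plural are illustrative.

$ kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# in one terminal, start a watch; creating, modifying, and deleting Noxu objects in
# another terminal yields the same ADDED / MODIFIED / DELETED sequence logged above
$ kubectl get noxus --watch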
• [SLOW TEST:61.411 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":309,"completed":164,"skipped":2829,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:24:27.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 13 07:24:27.698: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:27.707: INFO: Number of nodes with available pods: 0 Jan 13 07:24:27.707: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:24:28.721: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:28.729: INFO: Number of nodes with available pods: 0 Jan 13 07:24:28.729: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:24:29.834: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:29.841: INFO: Number of nodes with available pods: 0 Jan 13 07:24:29.841: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:24:30.808: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:30.815: INFO: Number of nodes with available pods: 0 Jan 13 07:24:30.815: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:24:31.718: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:31.725: INFO: Number of nodes with available pods: 0 Jan 13 07:24:31.725: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:24:32.843: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:32.856: INFO: Number of nodes with available pods: 2 Jan 13 07:24:32.856: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 13 07:24:33.037: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:33.093: INFO: Number of nodes with available pods: 1 Jan 13 07:24:33.093: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:24:34.105: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:34.112: INFO: Number of nodes with available pods: 1 Jan 13 07:24:34.112: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:24:35.108: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:35.115: INFO: Number of nodes with available pods: 1 Jan 13 07:24:35.115: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:24:36.104: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:36.111: INFO: Number of nodes with available pods: 1 Jan 13 07:24:36.111: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:24:37.116: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:24:37.138: INFO: Number of nodes with available pods: 2 Jan 13 07:24:37.138: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3806, will wait for the garbage collector to delete the pods Jan 13 07:24:37.209: INFO: Deleting DaemonSet.extensions daemon-set took: 9.552741ms Jan 13 07:24:37.809: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.585126ms Jan 13 07:25:19.915: INFO: Number of nodes with available pods: 0 Jan 13 07:25:19.915: INFO: Number of running nodes: 0, number of available pods: 0 Jan 13 07:25:19.924: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"501495"},"items":null} Jan 13 07:25:19.928: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"501495"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:25:19.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3806" for this suite. 
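The "simple DaemonSet" in this test is just a pod template scheduled onto every schedulable node; the behaviour verified above is that a daemon pod which ends up Failed is deleted and recreated automatically. A minimal hand-written equivalent (illustrative name and image):

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF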
• [SLOW TEST:52.409 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":309,"completed":165,"skipped":2849,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:25:19.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:25:20.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3917" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":309,"completed":166,"skipped":2864,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:25:20.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-708bafad-9466-4e75-a936-4b8968a743b2 STEP: Creating a pod to test consume configMaps Jan 13 07:25:20.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20" in namespace "projected-5543" to be "Succeeded or Failed" Jan 13 07:25:20.291: INFO: Pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.650363ms Jan 13 07:25:22.300: INFO: Pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013393603s Jan 13 07:25:24.308: INFO: Pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20": Phase="Running", Reason="", readiness=true. Elapsed: 4.021684748s Jan 13 07:25:26.316: INFO: Pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029712472s STEP: Saw pod success Jan 13 07:25:26.316: INFO: Pod "pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20" satisfied condition "Succeeded or Failed" Jan 13 07:25:26.322: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20 container agnhost-container: STEP: delete the pod Jan 13 07:25:26.379: INFO: Waiting for pod pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20 to disappear Jan 13 07:25:26.406: INFO: Pod pod-projected-configmaps-6dbc8b06-ceab-4078-b23d-407ff2d98b20 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:25:26.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5543" for this suite. • [SLOW TEST:6.291 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":309,"completed":167,"skipped":2870,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:25:26.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:25:43.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8084" for this suite. 
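The quota lifecycle above (quota created, a Secret counted against it, usage released on delete) corresponds to a ResourceQuota that tracks the secrets count; a minimal sketch with illustrative names:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-secrets
spec:
  hard:
    secrets: "10"
EOF
$ kubectl create secret generic quota-demo --from-literal=key=value
$ kubectl describe resourcequota quota-secrets     # Used for secrets goes up by one
$ kubectl delete secret quota-demo                 # and is released again after deletion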
• [SLOW TEST:17.298 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":309,"completed":168,"skipped":2876,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:25:43.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-1247/configmap-test-a619c218-7911-4956-a107-aaf8bbe8d4a7 STEP: Creating a pod to test consume configMaps Jan 13 07:25:43.854: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d" in namespace "configmap-1247" to be "Succeeded or Failed" Jan 13 07:25:43.874: INFO: Pod "pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.039635ms Jan 13 07:25:45.880: INFO: Pod "pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026089037s Jan 13 07:25:47.960: INFO: Pod "pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105563353s STEP: Saw pod success Jan 13 07:25:47.960: INFO: Pod "pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d" satisfied condition "Succeeded or Failed" Jan 13 07:25:47.964: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d container env-test: STEP: delete the pod Jan 13 07:25:47.987: INFO: Waiting for pod pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d to disappear Jan 13 07:25:47.992: INFO: Pod pod-configmaps-f1906762-5327-4e4c-b37d-de76feb2ad1d no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:25:47.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1247" for this suite. 
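Consuming a ConfigMap through the environment, as tested above, maps a key to an environment variable with configMapKeyRef; a minimal sketch (illustrative names):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo-config
          key: data-1
EOF
$ kubectl logs cm-env-demo        # prints CONFIG_DATA_1=value-1 once the pod has Succeeded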
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":309,"completed":169,"skipped":2879,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:25:48.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0113 07:26:28.317743 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 07:27:30.344: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Jan 13 07:27:30.345: INFO: Deleting pod "simpletest.rc-2b4n7" in namespace "gc-7084" Jan 13 07:27:30.411: INFO: Deleting pod "simpletest.rc-574n6" in namespace "gc-7084" Jan 13 07:27:30.451: INFO: Deleting pod "simpletest.rc-9wmzv" in namespace "gc-7084" Jan 13 07:27:30.481: INFO: Deleting pod "simpletest.rc-dgcdl" in namespace "gc-7084" Jan 13 07:27:30.546: INFO: Deleting pod "simpletest.rc-hzvbg" in namespace "gc-7084" Jan 13 07:27:30.855: INFO: Deleting pod "simpletest.rc-ltqn7" in namespace "gc-7084" Jan 13 07:27:31.039: INFO: Deleting pod "simpletest.rc-m7fsk" in namespace "gc-7084" Jan 13 07:27:31.179: INFO: Deleting pod "simpletest.rc-mt5f9" in namespace "gc-7084" Jan 13 07:27:31.769: INFO: Deleting pod "simpletest.rc-qfhpt" in namespace "gc-7084" Jan 13 07:27:31.993: INFO: Deleting pod "simpletest.rc-tnbbj" in namespace "gc-7084" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:27:32.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7084" for this suite. 
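The behaviour verified above is that deleting a replication controller with an orphaning delete option leaves its pods running, which is why the test deletes the simpletest.rc pods itself afterwards. By hand this looks roughly as follows; the controller name is inferred from the pod names in the log, and the flag spelling depends on the kubectl release:

# newer kubectl releases spell the orphaning policy explicitly; older ones use the
# deprecated boolean form --cascade=false
$ kubectl delete rc simpletest.rc --cascade=orphan
$ kubectl get pods          # the simpletest.rc-* pods are still running, now ownerless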
• [SLOW TEST:104.439 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":309,"completed":170,"skipped":2886,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:27:32.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 07:27:32.925: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:27:41.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-690" for this suite. 
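The scenario above, an init container that fails on a restartPolicy: Never pod so the app containers never start and the pod ends up Failed, looks roughly like this when written by hand (illustrative names):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]
EOF
$ kubectl get pod init-fail-demo      # STATUS shows Init:Error and the pod phase becomes Failed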
• [SLOW TEST:8.872 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":309,"completed":171,"skipped":2928,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:27:41.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jan 13 07:27:42.838: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jan 13 07:27:44.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119662, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119662, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119662, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746119662, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-7d6697c5b7\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:27:47.943: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:27:47.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:27:49.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7980" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.043 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":309,"completed":172,"skipped":2939,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:27:49.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create deployment with httpd image Jan 13 07:27:49.553: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8630 create -f -' Jan 13 07:27:55.914: INFO: stderr: "" Jan 13 07:27:55.914: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Jan 13 07:27:55.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8630 diff -f -' Jan 13 07:28:00.303: INFO: rc: 1 Jan 13 07:28:00.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-8630 delete -f -' Jan 13 07:28:01.697: INFO: stderr: "" Jan 13 07:28:01.697: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:28:01.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8630" for this suite. 
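The diff check above relies on kubectl diff's exit code: 0 when the live and declared states match, 1 when they differ (the "rc: 1" in the log). A hand-run equivalent, with illustrative image tags:

$ kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
# declare a different image and diff it against the live object
$ kubectl create deployment httpd-deployment --image=httpd:2.4.39-alpine \
    --dry-run=client -o yaml | kubectl diff -f -
$ echo $?        # 1, because the declared image differs from the live one
$ kubectl delete deployment httpd-deployment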
• [SLOW TEST:12.354 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:878 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":309,"completed":173,"skipped":2953,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:28:01.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:29:33.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7137" for this suite. STEP: Destroying namespace "nsdeletetest-1294" for this suite. Jan 13 07:29:33.384: INFO: Namespace nsdeletetest-1294 was already deleted STEP: Destroying namespace "nsdeletetest-30" for this suite. 
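The namespace-deletion guarantee tested above is straightforward to exercise by hand (illustrative names):

$ kubectl create namespace nsdelete-demo
$ kubectl run -n nsdelete-demo sleeper --image=busybox --restart=Never -- sleep 3600
$ kubectl delete namespace nsdelete-demo    # blocks until the namespace and everything in it is gone
$ kubectl get pods -n nsdelete-demo         # no pods remain once the namespace has been removed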
• [SLOW TEST:91.664 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":309,"completed":174,"skipped":2965,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:29:33.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:29:33.498: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab" in namespace "security-context-test-1827" to be "Succeeded or Failed" Jan 13 07:29:33.507: INFO: Pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005274ms Jan 13 07:29:35.515: INFO: Pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016923616s Jan 13 07:29:37.523: INFO: Pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab": Phase="Running", Reason="", readiness=true. Elapsed: 4.025251853s Jan 13 07:29:39.531: INFO: Pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032646271s Jan 13 07:29:39.531: INFO: Pod "busybox-readonly-false-735e0cb4-dd5f-4c59-9e0c-7cfad570aaab" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:29:39.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1827" for this suite. 
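The security-context case above simply confirms that a container with readOnlyRootFilesystem: false can write to its root filesystem; a minimal sketch (illustrative names):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo writable > /tmp/proof && cat /tmp/proof"]
    securityContext:
      readOnlyRootFilesystem: false
EOF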
• [SLOW TEST:6.154 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":309,"completed":175,"skipped":2970,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:29:39.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 13 07:29:39.700: INFO: Waiting up to 5m0s for pod "pod-222d40d6-e016-464c-bb7d-b78a10a12b88" in namespace "emptydir-717" to be "Succeeded or Failed" Jan 13 07:29:39.706: INFO: Pod "pod-222d40d6-e016-464c-bb7d-b78a10a12b88": Phase="Pending", Reason="", readiness=false. Elapsed: 5.677263ms Jan 13 07:29:41.714: INFO: Pod "pod-222d40d6-e016-464c-bb7d-b78a10a12b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013578646s Jan 13 07:29:43.721: INFO: Pod "pod-222d40d6-e016-464c-bb7d-b78a10a12b88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02128259s STEP: Saw pod success Jan 13 07:29:43.722: INFO: Pod "pod-222d40d6-e016-464c-bb7d-b78a10a12b88" satisfied condition "Succeeded or Failed" Jan 13 07:29:43.728: INFO: Trying to get logs from node leguer-worker2 pod pod-222d40d6-e016-464c-bb7d-b78a10a12b88 container test-container: STEP: delete the pod Jan 13 07:29:43.929: INFO: Waiting for pod pod-222d40d6-e016-464c-bb7d-b78a10a12b88 to disappear Jan 13 07:29:43.938: INFO: Pod pod-222d40d6-e016-464c-bb7d-b78a10a12b88 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:29:43.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-717" for this suite. 
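The emptyDir case above uses a memory-backed (tmpfs) volume and checks file modes inside it; a hand-written equivalent (illustrative names) is:

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/volume/file && chmod 0666 /mnt/volume/file && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF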
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":176,"skipped":2977,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:29:43.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 13 07:29:44.710: INFO: Pod name wrapped-volume-race-066b6e37-4ecf-4593-9c3f-87bfbba2889c: Found 0 pods out of 5 Jan 13 07:29:49.729: INFO: Pod name wrapped-volume-race-066b6e37-4ecf-4593-9c3f-87bfbba2889c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-066b6e37-4ecf-4593-9c3f-87bfbba2889c in namespace emptydir-wrapper-3412, will wait for the garbage collector to delete the pods Jan 13 07:30:06.032: INFO: Deleting ReplicationController wrapped-volume-race-066b6e37-4ecf-4593-9c3f-87bfbba2889c took: 8.89124ms Jan 13 07:30:06.633: INFO: Terminating ReplicationController wrapped-volume-race-066b6e37-4ecf-4593-9c3f-87bfbba2889c pods took: 600.97107ms STEP: Creating RC which spawns configmap-volume pods Jan 13 07:30:30.502: INFO: Pod name wrapped-volume-race-8ca037eb-c67b-4508-855f-6eda9ed2ebd1: Found 0 pods out of 5 Jan 13 07:30:35.522: INFO: Pod name wrapped-volume-race-8ca037eb-c67b-4508-855f-6eda9ed2ebd1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8ca037eb-c67b-4508-855f-6eda9ed2ebd1 in namespace emptydir-wrapper-3412, will wait for the garbage collector to delete the pods Jan 13 07:30:51.662: INFO: Deleting ReplicationController wrapped-volume-race-8ca037eb-c67b-4508-855f-6eda9ed2ebd1 took: 24.527918ms Jan 13 07:30:52.263: INFO: Terminating ReplicationController wrapped-volume-race-8ca037eb-c67b-4508-855f-6eda9ed2ebd1 pods took: 600.822254ms STEP: Creating RC which spawns configmap-volume pods Jan 13 07:31:20.159: INFO: Pod name wrapped-volume-race-7c1610d4-f701-4e11-9480-6b34078a3064: Found 0 pods out of 5 Jan 13 07:31:25.177: INFO: Pod name wrapped-volume-race-7c1610d4-f701-4e11-9480-6b34078a3064: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7c1610d4-f701-4e11-9480-6b34078a3064 in namespace emptydir-wrapper-3412, will wait for the garbage collector to delete the pods Jan 13 07:31:39.342: INFO: Deleting ReplicationController wrapped-volume-race-7c1610d4-f701-4e11-9480-6b34078a3064 took: 18.663523ms Jan 13 07:31:39.943: INFO: Terminating ReplicationController wrapped-volume-race-7c1610d4-f701-4e11-9480-6b34078a3064 pods took: 600.82362ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:32:31.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3412" for this suite. • [SLOW TEST:167.079 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":309,"completed":177,"skipped":2982,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:32:31.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 13 07:32:31.133: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:30.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6941" for this suite. 
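The "submitted and removed" spec sets up a watch before creating the pod so that both the creation and the graceful deletion are observed as events. A sketch of that pattern with client-go, assuming an existing clientset (namespace and pod name are illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchPodLifecycle watches a single pod by name and returns once its
// deletion has been observed.
func watchPodLifecycle(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	w, err := client.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added:
			fmt.Println("pod creation observed")
		case watch.Modified:
			fmt.Println("pod update observed")
		case watch.Deleted:
			fmt.Println("pod deletion observed")
			return nil
		}
	}
	return fmt.Errorf("watch channel closed before deletion was observed")
}
```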
• [SLOW TEST:59.126 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":309,"completed":178,"skipped":3074,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:30.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:33:30.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9" in namespace "projected-45" to be "Succeeded or Failed" Jan 13 07:33:30.287: INFO: Pod "downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.838019ms Jan 13 07:33:32.295: INFO: Pod "downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017483361s Jan 13 07:33:34.302: INFO: Pod "downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024859183s STEP: Saw pod success Jan 13 07:33:34.303: INFO: Pod "downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9" satisfied condition "Succeeded or Failed" Jan 13 07:33:34.309: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9 container client-container: STEP: delete the pod Jan 13 07:33:34.561: INFO: Waiting for pod downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9 to disappear Jan 13 07:33:34.663: INFO: Pod downwardapi-volume-3278738c-bf94-4076-9abe-96f506c9f3d9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-45" for this suite. 
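The projected downwardAPI spec mounts the pod's own name into the container as a file. The volume definition looks roughly like the following sketch (volume and file names are illustrative):

```go
package e2esketch

import corev1 "k8s.io/api/core/v1"

// podNameProjectedVolume returns a projected volume that exposes the pod's
// metadata.name as the file "podname" inside the mount.
func podNameProjectedVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
}
```

The test container mounts this volume, prints the projected file, and the framework compares that output with the pod's actual name.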
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":309,"completed":179,"skipped":3087,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:34.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 13 07:33:34.795: INFO: Waiting up to 5m0s for pod "pod-82579a2b-259f-49f5-a681-db8e74a6ab04" in namespace "emptydir-9277" to be "Succeeded or Failed" Jan 13 07:33:34.801: INFO: Pod "pod-82579a2b-259f-49f5-a681-db8e74a6ab04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435202ms Jan 13 07:33:36.809: INFO: Pod "pod-82579a2b-259f-49f5-a681-db8e74a6ab04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013901361s Jan 13 07:33:38.817: INFO: Pod "pod-82579a2b-259f-49f5-a681-db8e74a6ab04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021647266s STEP: Saw pod success Jan 13 07:33:38.817: INFO: Pod "pod-82579a2b-259f-49f5-a681-db8e74a6ab04" satisfied condition "Succeeded or Failed" Jan 13 07:33:38.822: INFO: Trying to get logs from node leguer-worker pod pod-82579a2b-259f-49f5-a681-db8e74a6ab04 container test-container: STEP: delete the pod Jan 13 07:33:38.881: INFO: Waiting for pod pod-82579a2b-259f-49f5-a681-db8e74a6ab04 to disappear Jan 13 07:33:38.885: INFO: Pod pod-82579a2b-259f-49f5-a681-db8e74a6ab04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:38.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9277" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":180,"skipped":3092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:38.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name s-test-opt-del-2ce1813f-18a4-4516-8560-718bbeabb853 STEP: Creating secret with name s-test-opt-upd-4394f7c8-5b20-4f79-933d-9bcef23127a0 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2ce1813f-18a4-4516-8560-718bbeabb853 STEP: Updating secret s-test-opt-upd-4394f7c8-5b20-4f79-933d-9bcef23127a0 STEP: Creating secret with name s-test-opt-create-5ca38a36-d95a-442a-806a-d49b0ca95984 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:47.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-24" for this suite. • [SLOW TEST:8.342 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":181,"skipped":3121,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:47.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:47.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3974" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":309,"completed":182,"skipped":3132,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:47.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:33:47.613: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:51.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8824" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":309,"completed":183,"skipped":3140,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:51.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-62280d11-5c32-4a09-914a-83f6493486f6 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:33:57.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-553" for this suite. 
• [SLOW TEST:6.233 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":309,"completed":184,"skipped":3143,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:33:57.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod Jan 13 07:33:58.030: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:34:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6593" for this suite. 
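The init-container spec creates a RestartAlways pod with initContainers in spec.initContainers; the kubelet must run them to completion, in order, before the app container starts. A rough sketch of such a pod (names, images, and commands are illustrative, not the test's fixtures):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createInitContainerPod creates a pod with two init containers that must
// both exit successfully before the long-running app container is started.
func createInitContainerPod(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	return client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}
```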
• [SLOW TEST:8.586 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":309,"completed":185,"skipped":3161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:34:06.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-b42b3b90-bda8-4e74-a94d-92a446c3c2d7 STEP: Creating a pod to test consume secrets Jan 13 07:34:06.741: INFO: Waiting up to 5m0s for pod "pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba" in namespace "secrets-5311" to be "Succeeded or Failed" Jan 13 07:34:06.759: INFO: Pod "pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba": Phase="Pending", Reason="", readiness=false. Elapsed: 18.291278ms Jan 13 07:34:08.765: INFO: Pod "pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02464258s Jan 13 07:34:10.772: INFO: Pod "pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030906141s STEP: Saw pod success Jan 13 07:34:10.772: INFO: Pod "pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba" satisfied condition "Succeeded or Failed" Jan 13 07:34:10.778: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba container secret-volume-test: STEP: delete the pod Jan 13 07:34:10.799: INFO: Waiting for pod pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba to disappear Jan 13 07:34:10.838: INFO: Pod pod-secrets-8c0e2fc9-f882-45b2-9455-d827b466acba no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:34:10.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5311" for this suite. STEP: Destroying namespace "secret-namespace-82" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":309,"completed":186,"skipped":3193,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:34:10.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 13 07:34:18.868: INFO: 10 pods remaining Jan 13 07:34:18.869: INFO: 10 pods has nil DeletionTimestamp Jan 13 07:34:18.869: INFO: Jan 13 07:34:19.641: INFO: 10 pods remaining Jan 13 07:34:19.641: INFO: 10 pods has nil DeletionTimestamp Jan 13 07:34:19.641: INFO: Jan 13 07:34:21.092: INFO: 9 pods remaining Jan 13 07:34:21.092: INFO: 0 pods has nil DeletionTimestamp Jan 13 07:34:21.092: INFO: Jan 13 07:34:21.333: INFO: 0 pods remaining Jan 13 07:34:21.333: INFO: 0 pods has nil DeletionTimestamp Jan 13 07:34:21.333: INFO: Jan 13 07:34:22.844: INFO: 0 pods remaining Jan 13 07:34:22.844: INFO: 0 pods has nil DeletionTimestamp Jan 13 07:34:22.844: INFO: STEP: Gathering metrics W0113 07:34:23.838098 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 07:35:25.867: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:35:25.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1762" for this suite. 
• [SLOW TEST:75.025 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":309,"completed":187,"skipped":3207,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:35:25.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:35:26.015: INFO: Creating ReplicaSet my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1 Jan 13 07:35:26.033: INFO: Pod name my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1: Found 0 pods out of 1 Jan 13 07:35:31.044: INFO: Pod name my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1: Found 1 pods out of 1 Jan 13 07:35:31.044: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1" is running Jan 13 07:35:31.052: INFO: Pod "my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1-rj4gh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 07:35:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 07:35:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 07:35:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 07:35:26 +0000 UTC Reason: Message:}]) Jan 13 07:35:31.058: INFO: Trying to dial the pod Jan 13 07:35:36.079: INFO: Controller my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1: Got expected result from replica 1 [my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1-rj4gh]: "my-hostname-basic-2bc27b83-4984-432d-bb75-9cc1f2f0aed1-rj4gh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:35:36.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9107" for this suite. 
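The ReplicaSet spec creates a single-replica set from a public image, waits for its pod to run, and then dials the pod to confirm each replica serves its own hostname. Creating such a ReplicaSet with client-go looks roughly like the following; the image shown appears elsewhere in this run and is used here purely for illustration:

```go
package e2esketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBasicReplicaSet creates a one-replica ReplicaSet whose selector and
// pod template share the same label.
func createBasicReplicaSet(ctx context.Context, client kubernetes.Interface, ns string) (*appsv1.ReplicaSet, error) {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic-demo"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-demo", Labels: labels},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic-demo",
						Image: "docker.io/library/httpd:2.4.38-alpine",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	return client.AppsV1().ReplicaSets(ns).Create(ctx, rs, metav1.CreateOptions{})
}
```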
• [SLOW TEST:10.208 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":188,"skipped":3214,"failed":0} [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:35:36.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 07:35:36.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2265 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 13 07:35:37.580: INFO: stderr: "" Jan 13 07:35:37.580: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Jan 13 07:35:37.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2265 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Jan 13 07:35:40.074: INFO: stderr: "" Jan 13 07:35:40.075: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jan 13 07:35:40.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-2265 delete pods e2e-test-httpd-pod' Jan 13 07:35:50.108: INFO: stderr: "" Jan 13 07:35:50.108: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:35:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2265" for this suite. 
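The kubectl dry-run spec patches the running pod's image with --dry-run=server and then verifies the live object still carries the original httpd image, i.e. the patch was validated and admitted but never persisted. The same server-side dry run can be issued through client-go; the pod name, image, and patch body below are taken from the kubectl commands quoted in the log:

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// dryRunImagePatch sends a strategic-merge patch with DryRun set to "All":
// the API server runs admission and validation but does not store the change.
func dryRunImagePatch(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.Pod, error) {
	patch := []byte(`{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"docker.io/library/busybox:1.29"}]}}`)
	return client.CoreV1().Pods(ns).Patch(ctx, "e2e-test-httpd-pod",
		types.StrategicMergePatchType, patch,
		metav1.PatchOptions{DryRun: []string{metav1.DryRunAll}})
}
```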
• [SLOW TEST:14.080 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":309,"completed":189,"skipped":3214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:35:50.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:35:50.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb" in namespace "downward-api-1618" to be "Succeeded or Failed" Jan 13 07:35:50.309: INFO: Pod "downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb": Phase="Pending", Reason="", readiness=false. Elapsed: 51.774818ms Jan 13 07:35:52.318: INFO: Pod "downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060751662s Jan 13 07:35:54.326: INFO: Pod "downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068728039s STEP: Saw pod success Jan 13 07:35:54.326: INFO: Pod "downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb" satisfied condition "Succeeded or Failed" Jan 13 07:35:54.331: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb container client-container: STEP: delete the pod Jan 13 07:35:54.414: INFO: Waiting for pod downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb to disappear Jan 13 07:35:54.419: INFO: Pod downwardapi-volume-663bda9b-6898-4fa5-a7bc-860691e8dbeb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:35:54.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1618" for this suite. 
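The Downward API volume spec above checks the DefaultMode applied to the projected files. Setting an explicit mode on such a volume looks like the sketch below; the 0400 value only illustrates the field and is not the value the conformance test asserts:

```go
package e2esketch

import corev1 "k8s.io/api/core/v1"

// downwardAPIVolumeWithMode builds a downward API volume whose files are
// created with an explicit default mode.
func downwardAPIVolumeWithMode() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &mode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}
```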
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":190,"skipped":3291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:35:54.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 07:35:54.602: INFO: Waiting up to 5m0s for pod "downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255" in namespace "downward-api-5565" to be "Succeeded or Failed" Jan 13 07:35:54.610: INFO: Pod "downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255": Phase="Pending", Reason="", readiness=false. Elapsed: 8.29536ms Jan 13 07:35:56.620: INFO: Pod "downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017831386s Jan 13 07:35:58.629: INFO: Pod "downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026665744s STEP: Saw pod success Jan 13 07:35:58.629: INFO: Pod "downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255" satisfied condition "Succeeded or Failed" Jan 13 07:35:58.634: INFO: Trying to get logs from node leguer-worker2 pod downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255 container dapi-container: STEP: delete the pod Jan 13 07:35:58.690: INFO: Waiting for pod downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255 to disappear Jan 13 07:35:58.698: INFO: Pod downward-api-b1afb546-ebe9-4cee-bc0d-90ef911ee255 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:35:58.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5565" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":309,"completed":191,"skipped":3326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:35:58.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Jan 13 07:35:58.834: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Jan 13 07:35:58.872: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 13 07:35:58.874: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Jan 13 07:35:58.889: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Jan 13 07:35:58.889: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Jan 13 07:35:58.991: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Jan 13 07:35:58.991: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Jan 13 07:36:06.180: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:36:06.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6004" for this suite. 
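The LimitRange spec verifies that container defaults and default requests from a LimitRange are merged into pods that omit them, using the cpu/memory/ephemeral-storage quantities printed above (100m/200Mi/200Gi as default requests, 500m/500Mi/500Gi as default limits). A LimitRange carrying those defaults can be written as follows (object name is illustrative):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDefaultingLimitRange creates a LimitRange whose Default and
// DefaultRequest values are applied to containers that do not specify
// their own resources.
func createDefaultingLimitRange(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.LimitRange, error) {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limitrange-demo"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	return client.CoreV1().LimitRanges(ns).Create(ctx, lr, metav1.CreateOptions{})
}
```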
• [SLOW TEST:7.537 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":309,"completed":192,"skipped":3349,"failed":0} [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:36:06.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service endpoint-test2 in namespace services-5089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5089 to expose endpoints map[] Jan 13 07:36:06.390: INFO: successfully validated that service endpoint-test2 in namespace services-5089 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-5089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5089 to expose endpoints map[pod1:[80]] Jan 13 07:36:10.462: INFO: successfully validated that service endpoint-test2 in namespace services-5089 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-5089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5089 to expose endpoints map[pod1:[80] pod2:[80]] Jan 13 07:36:14.835: INFO: Unexpected endpoints: found map[1b84af41-86ce-430e-a94b-1ba6ca17eca6:[80]], expected map[pod1:[80] pod2:[80]], will retry Jan 13 07:36:15.537: INFO: successfully validated that service endpoint-test2 in namespace services-5089 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-5089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5089 to expose endpoints map[pod2:[80]] Jan 13 07:36:15.603: INFO: successfully validated that service endpoint-test2 in namespace services-5089 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-5089 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5089 to expose endpoints map[] Jan 13 07:36:15.692: INFO: successfully validated that service endpoint-test2 in namespace services-5089 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:36:16.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5089" for this suite. 
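The Services spec creates a selector-based service and then checks that its Endpoints object tracks pods as they are created and deleted (the map[pod1:[80] pod2:[80]] lines above). A service of that shape, sketched with an illustrative selector label:

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createEndpointTestService creates a ClusterIP service; any ready pod that
// carries the selector label is added to the service's Endpoints on port 80.
func createEndpointTestService(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.Service, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	return client.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{})
}
```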
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:9.919 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":309,"completed":193,"skipped":3349,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:36:16.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 07:36:16.637: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 07:37:16.745: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create pods that use 2/3 of node resources. Jan 13 07:37:16.787: INFO: Created pod: pod0-sched-preemption-low-priority Jan 13 07:37:16.842: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:37:34.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1003" for this suite. 
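The preemption spec fills roughly two thirds of node resources with low- and medium-priority pods and then submits a high-priority pod with the same requirements, expecting the scheduler to evict a lower-priority victim. The priority classes behind that behaviour are ordinary cluster-scoped objects; a sketch of one, plus a pod that references it (class name, value, and image are illustrative):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPriorityClassAndPod creates a PriorityClass and a pod that uses it.
// Pods with a higher Value can preempt already-scheduled pods with a lower
// one when resources are scarce.
func createPriorityClassAndPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "high-priority-demo"},
		Value:       1000,
		Description: "illustrative class for preemption",
	}
	if _, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority-pod"},
		Spec: corev1.PodSpec{
			PriorityClassName: "high-priority-demo",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```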
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:78.824 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":309,"completed":194,"skipped":3356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:37:35.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 13 07:37:35.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 create -f -' Jan 13 07:37:38.235: INFO: stderr: "" Jan 13 07:37:38.236: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 07:37:38.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:37:39.604: INFO: stderr: "" Jan 13 07:37:39.605: INFO: stdout: "update-demo-nautilus-7rhrc update-demo-nautilus-vrckt " Jan 13 07:37:39.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods update-demo-nautilus-7rhrc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:37:40.930: INFO: stderr: "" Jan 13 07:37:40.930: INFO: stdout: "" Jan 13 07:37:40.931: INFO: update-demo-nautilus-7rhrc is created but not running Jan 13 07:37:45.931: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:37:47.263: INFO: stderr: "" Jan 13 07:37:47.263: INFO: stdout: "update-demo-nautilus-7rhrc update-demo-nautilus-vrckt " Jan 13 07:37:47.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods update-demo-nautilus-7rhrc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:37:48.577: INFO: stderr: "" Jan 13 07:37:48.577: INFO: stdout: "true" Jan 13 07:37:48.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods update-demo-nautilus-7rhrc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:37:49.904: INFO: stderr: "" Jan 13 07:37:49.904: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:37:49.905: INFO: validating pod update-demo-nautilus-7rhrc Jan 13 07:37:49.912: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:37:49.913: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:37:49.913: INFO: update-demo-nautilus-7rhrc is verified up and running Jan 13 07:37:49.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods update-demo-nautilus-vrckt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:37:51.403: INFO: stderr: "" Jan 13 07:37:51.403: INFO: stdout: "true" Jan 13 07:37:51.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods update-demo-nautilus-vrckt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:37:52.738: INFO: stderr: "" Jan 13 07:37:52.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:37:52.738: INFO: validating pod update-demo-nautilus-vrckt Jan 13 07:37:52.744: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:37:52.744: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:37:52.744: INFO: update-demo-nautilus-vrckt is verified up and running STEP: using delete to clean up resources Jan 13 07:37:52.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 delete --grace-period=0 --force -f -' Jan 13 07:37:54.447: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 13 07:37:54.448: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 13 07:37:54.449: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get rc,svc -l name=update-demo --no-headers' Jan 13 07:37:59.362: INFO: stderr: "No resources found in kubectl-1723 namespace.\n" Jan 13 07:37:59.362: INFO: stdout: "" Jan 13 07:37:59.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-1723 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 07:38:00.750: INFO: stderr: "" Jan 13 07:38:00.751: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:38:00.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1723" for this suite. • [SLOW TEST:25.751 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":309,"completed":195,"skipped":3379,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:38:00.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-d7dd7388-8330-40ea-b03d-5172efc24722 STEP: Creating a pod to test consume secrets Jan 13 07:38:00.883: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8" in namespace "projected-2990" to be "Succeeded or Failed" Jan 13 07:38:00.902: INFO: Pod "pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.384873ms Jan 13 07:38:02.909: INFO: Pod "pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025793999s Jan 13 07:38:04.918: INFO: Pod "pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034261472s STEP: Saw pod success Jan 13 07:38:04.918: INFO: Pod "pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8" satisfied condition "Succeeded or Failed" Jan 13 07:38:04.925: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8 container projected-secret-volume-test: STEP: delete the pod Jan 13 07:38:05.005: INFO: Waiting for pod pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8 to disappear Jan 13 07:38:05.011: INFO: Pod pod-projected-secrets-03d95bc4-24c9-4e90-8e76-77b595c8ddc8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:38:05.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2990" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":196,"skipped":3381,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:38:05.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:38:05.110: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Jan 13 07:38:27.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 create -f -' Jan 13 07:38:33.651: INFO: stderr: "" Jan 13 07:38:33.652: INFO: stdout: "e2e-test-crd-publish-openapi-7189-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 13 07:38:33.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 delete e2e-test-crd-publish-openapi-7189-crds test-foo' Jan 13 07:38:35.076: INFO: stderr: "" Jan 13 07:38:35.076: INFO: stdout: "e2e-test-crd-publish-openapi-7189-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 13 07:38:35.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 apply -f -' Jan 13 07:38:38.787: INFO: stderr: "" Jan 13 07:38:38.787: INFO: stdout: "e2e-test-crd-publish-openapi-7189-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 13 07:38:38.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 delete e2e-test-crd-publish-openapi-7189-crds test-foo' Jan 13 07:38:40.214: INFO: stderr: "" Jan 13 07:38:40.214: INFO: stdout: "e2e-test-crd-publish-openapi-7189-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 13 07:38:40.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 create -f -' Jan 13 07:38:44.767: INFO: rc: 1 Jan 13 07:38:44.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 apply -f -' Jan 13 07:38:48.463: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 13 07:38:48.464: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 create -f -' Jan 13 07:38:52.614: INFO: rc: 1 Jan 13 07:38:52.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 --namespace=crd-publish-openapi-2626 apply -f -' Jan 13 07:38:55.049: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 13 07:38:55.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 explain e2e-test-crd-publish-openapi-7189-crds' Jan 13 07:38:58.373: INFO: stderr: "" Jan 13 07:38:58.373: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7189-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 13 07:38:58.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 explain e2e-test-crd-publish-openapi-7189-crds.metadata' Jan 13 07:39:02.108: INFO: stderr: "" Jan 13 07:39:02.108: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7189-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 13 07:39:02.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 explain e2e-test-crd-publish-openapi-7189-crds.spec' Jan 13 07:39:05.005: INFO: stderr: "" Jan 13 07:39:05.005: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7189-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 13 07:39:05.006: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 explain e2e-test-crd-publish-openapi-7189-crds.spec.bars' Jan 13 07:39:07.310: INFO: stderr: "" Jan 13 07:39:07.310: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7189-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 13 07:39:07.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2626 explain e2e-test-crd-publish-openapi-7189-crds.spec.bars2' Jan 13 07:39:11.094: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:39:33.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2626" for this suite. 
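For reference, a CRD with a validation schema similar in shape to the one this test publishes (spec.bars as a list of objects with a required name, an age, and a bazs string list) can be created and inspected with kubectl as sketched below. The group, resource names and field types here are illustrative stand-ins, not the randomly generated e2e-test-crd-publish-openapi-7189-crds resources from this run:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com            # illustrative; the e2e test generates a random group and kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:              # published via the aggregated OpenAPI, which kubectl validation and explain consume
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
                    age:
                      type: string  # field types here are assumptions, not taken from the run
                    bazs:
                      type: array
                      items:
                        type: string
EOF
$ kubectl explain foos.spec.bars     # succeeds once the schema has been published
$ kubectl explain foos.spec.bars2    # fails: the field does not exist in the schema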
• [SLOW TEST:88.606 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":309,"completed":197,"skipped":3402,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:39:33.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:39:33.704: INFO: Creating deployment "test-recreate-deployment" Jan 13 07:39:33.711: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 13 07:39:33.771: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 13 07:39:35.784: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 13 07:39:35.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120373, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120373, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120373, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120373, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-786dd7c454\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:39:37.796: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 13 07:39:37.814: INFO: Updating deployment test-recreate-deployment Jan 13 07:39:37.815: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 07:39:38.527: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9477 77f661b8-25c4-42b5-aeb8-e88fbb19db5e 505503 2 2021-01-13 
07:39:33 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-13 07:39:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 07:39:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004712818 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-13 07:39:38 +0000 UTC,LastTransitionTime:2021-01-13 07:39:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-01-13 07:39:38 +0000 UTC,LastTransitionTime:2021-01-13 07:39:33 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 13 07:39:38.587: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9477 dc8a3e84-42ca-49e3-892b-00f87ffb173e 505500 1 2021-01-13 07:39:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 
77f661b8-25c4-42b5-aeb8-e88fbb19db5e 0x4004712df0 0x4004712df1}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:39:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77f661b8-25c4-42b5-aeb8-e88fbb19db5e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004712ea8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:39:38.587: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 13 07:39:38.588: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-9477 18d8adc7-66b8-4e7d-b4fe-0d0aee264c4d 505491 2 2021-01-13 07:39:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 77f661b8-25c4-42b5-aeb8-e88fbb19db5e 0x4004712c87 0x4004712c88}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:39:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77f661b8-25c4-42b5-aeb8-e88fbb19db5e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4004712d38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:39:38.602: INFO: Pod "test-recreate-deployment-f79dd4667-ztlb4" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-ztlb4 test-recreate-deployment-f79dd4667- deployment-9477 33d18b4a-a48f-4aba-a323-09e8ded66661 505504 0 2021-01-13 07:39:38 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 dc8a3e84-42ca-49e3-892b-00f87ffb173e 0x40047133f0 0x40047133f1}] [] [{kube-controller-manager Update v1 2021-01-13 07:39:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc8a3e84-42ca-49e3-892b-00f87ffb173e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:39:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bhwct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bhwct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bhwct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:39:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:39:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:39:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:39:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:39:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:39:38.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9477" for this suite. • [SLOW TEST:5.007 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":309,"completed":198,"skipped":3420,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:39:38.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:39:39.097: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
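The RecreateDeployment test above drives the rollout through the API and replaces the whole pod template (swapping the agnhost container for an httpd one). A minimal kubectl equivalent, using the namespace and images seen in the run but only changing the image rather than renaming the container, might look like this sketch (not the test's own code):

$ cat <<'EOF' | kubectl -n deployment-9477 apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                  # old ReplicaSet is scaled to 0 before the new one is scaled up
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
EOF
$ kubectl -n deployment-9477 set image deployment/test-recreate-deployment agnhost=docker.io/library/httpd:2.4.38-alpine
$ kubectl -n deployment-9477 rollout status deployment/test-recreate-deployment

With Recreate there is no surge, which is why the status dump above shows AvailableReplicas dropping to 0 while the new ReplicaSet test-recreate-deployment-f79dd4667 is still coming up.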
Jan 13 07:39:39.110: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:39.125: INFO: Number of nodes with available pods: 0 Jan 13 07:39:39.125: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:39:40.136: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:40.141: INFO: Number of nodes with available pods: 0 Jan 13 07:39:40.142: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:39:41.139: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:41.370: INFO: Number of nodes with available pods: 0 Jan 13 07:39:41.370: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:39:42.136: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:42.142: INFO: Number of nodes with available pods: 0 Jan 13 07:39:42.142: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:39:43.138: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:43.160: INFO: Number of nodes with available pods: 0 Jan 13 07:39:43.160: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:39:44.137: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:44.144: INFO: Number of nodes with available pods: 2 Jan 13 07:39:44.144: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jan 13 07:39:44.410: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:44.410: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:44.457: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:45.466: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:45.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:45.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:46.478: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:46.478: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:46.478: INFO: Wrong image for pod: daemon-set-gfd9t. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:46.497: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:47.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:47.467: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:47.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:47.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:48.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:48.467: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:48.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:48.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:49.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:49.467: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:49.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:49.476: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:50.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:50.467: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:50.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:50.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:51.472: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:51.472: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:51.472: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:51.479: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:52.469: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:52.469: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:52.469: INFO: Wrong image for pod: daemon-set-gfd9t. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:52.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:53.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:53.467: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:53.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:53.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:54.468: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:54.468: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:54.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:54.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:55.468: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:55.468: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:55.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:55.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:56.466: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:56.466: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:56.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:56.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:57.466: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:57.466: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:57.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:57.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:58.469: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:58.469: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:58.469: INFO: Wrong image for pod: daemon-set-gfd9t. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:58.479: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:39:59.467: INFO: Wrong image for pod: daemon-set-bmptg. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:59.468: INFO: Pod daemon-set-bmptg is not available Jan 13 07:39:59.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:39:59.480: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:00.467: INFO: Pod daemon-set-cmx8v is not available Jan 13 07:40:00.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:00.476: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:01.466: INFO: Pod daemon-set-cmx8v is not available Jan 13 07:40:01.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:01.472: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:02.466: INFO: Pod daemon-set-cmx8v is not available Jan 13 07:40:02.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:02.476: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:03.484: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:03.561: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:04.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:04.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:05.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:05.468: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:05.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:06.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 13 07:40:06.466: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:06.476: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:07.464: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:07.464: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:07.473: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:08.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:08.468: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:08.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:09.465: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:09.465: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:09.476: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:10.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:10.468: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:10.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:11.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:11.467: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:11.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:12.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:12.466: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:12.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:13.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:13.467: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:13.474: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:14.465: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 13 07:40:14.465: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:14.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:15.472: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:15.472: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:15.481: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:16.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:16.468: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:16.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:17.487: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:17.487: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:17.500: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:18.468: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:18.468: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:18.481: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:19.485: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:19.485: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:19.496: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:20.465: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:20.465: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:20.473: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:21.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:21.466: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:21.494: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:22.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 13 07:40:22.467: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:22.480: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:23.472: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:23.473: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:23.482: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:24.469: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:24.469: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:24.480: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:25.466: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:25.466: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:25.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:26.469: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:26.469: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:26.478: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:27.478: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:27.479: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:27.515: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:28.467: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:28.467: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:28.477: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:29.472: INFO: Wrong image for pod: daemon-set-gfd9t. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.21, got: docker.io/library/httpd:2.4.38-alpine. Jan 13 07:40:29.473: INFO: Pod daemon-set-gfd9t is not available Jan 13 07:40:29.489: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:30.466: INFO: Pod daemon-set-bdv76 is not available Jan 13 07:40:30.475: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 13 07:40:30.485: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:30.490: INFO: Number of nodes with available pods: 1 Jan 13 07:40:30.491: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:40:31.639: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:31.647: INFO: Number of nodes with available pods: 1 Jan 13 07:40:31.647: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:40:32.505: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:32.513: INFO: Number of nodes with available pods: 1 Jan 13 07:40:32.513: INFO: Node leguer-worker2 is running more than one daemon pod Jan 13 07:40:33.499: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jan 13 07:40:33.505: INFO: Number of nodes with available pods: 2 Jan 13 07:40:33.505: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9442, will wait for the garbage collector to delete the pods Jan 13 07:40:33.595: INFO: Deleting DaemonSet.extensions daemon-set took: 8.247939ms Jan 13 07:40:34.196: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.67722ms Jan 13 07:41:29.903: INFO: Number of nodes with available pods: 0 Jan 13 07:41:29.903: INFO: Number of running nodes: 0, number of available pods: 0 Jan 13 07:41:29.909: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"505829"},"items":null} Jan 13 07:41:29.912: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505829"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:41:29.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9442" for this suite. 
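The long polling block above is a DaemonSet RollingUpdate in progress: once the pod template's image is changed from docker.io/library/httpd:2.4.38-alpine to k8s.gcr.io/e2e-test-images/agnhost:2.21, the controller replaces the old pod (daemon-set-gfd9t here) and the test keeps checking until a pod with the new image is available on every schedulable node. The following client-go sketch reproduces the shape of that flow; it is not the framework's own helper code, and the clientset, namespace and label names are assumed for illustration.

// Sketch: create a DaemonSet with the RollingUpdate strategy, then change its
// image so the controller rolls the pods, as exercised by the test above.
// Assumes clientset is a configured *kubernetes.Clientset and ns is a test namespace.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rollDaemonSetImage(ctx context.Context, clientset *kubernetes.Clientset, ns string) error {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate tells the controller to replace pods when the template changes.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{Type: appsv1.RollingUpdateDaemonSetStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine"}},
				},
			},
		},
	}
	created, err := clientset.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Changing the template image is what triggers the per-node pod replacement
	// that the log polls for above.
	created.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.21"
	_, err = clientset.AppsV1().DaemonSets(ns).Update(ctx, created, metav1.UpdateOptions{})
	return err
}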
• [SLOW TEST:111.306 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":309,"completed":199,"skipped":3431,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:41:29.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:41:30.059: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 13 07:41:30.084: INFO: Number of nodes with available pods: 0 Jan 13 07:41:30.084: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Jan 13 07:41:30.155: INFO: Number of nodes with available pods: 0 Jan 13 07:41:30.156: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:31.164: INFO: Number of nodes with available pods: 0 Jan 13 07:41:31.164: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:32.177: INFO: Number of nodes with available pods: 0 Jan 13 07:41:32.177: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:33.165: INFO: Number of nodes with available pods: 0 Jan 13 07:41:33.165: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:34.164: INFO: Number of nodes with available pods: 1 Jan 13 07:41:34.164: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 13 07:41:34.233: INFO: Number of nodes with available pods: 1 Jan 13 07:41:34.233: INFO: Number of running nodes: 0, number of available pods: 1 Jan 13 07:41:35.256: INFO: Number of nodes with available pods: 0 Jan 13 07:41:35.256: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 13 07:41:35.318: INFO: Number of nodes with available pods: 0 Jan 13 07:41:35.318: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:36.325: INFO: Number of nodes with available pods: 0 Jan 13 07:41:36.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:37.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:37.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:38.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:38.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:39.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:39.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:40.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:40.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:41.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:41.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:42.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:42.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:43.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:43.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:44.327: INFO: Number of nodes with available pods: 0 Jan 13 07:41:44.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:45.327: INFO: Number of nodes with available pods: 0 Jan 13 07:41:45.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:46.327: INFO: Number of nodes with available pods: 0 Jan 13 07:41:46.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:47.328: INFO: Number of nodes with available pods: 0 Jan 13 07:41:47.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:48.328: INFO: Number of nodes with available pods: 0 Jan 13 07:41:48.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:49.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:49.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:50.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:50.326: INFO: Node leguer-worker is running 
more than one daemon pod Jan 13 07:41:51.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:51.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:52.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:52.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:53.327: INFO: Number of nodes with available pods: 0 Jan 13 07:41:53.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:54.328: INFO: Number of nodes with available pods: 0 Jan 13 07:41:54.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:55.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:55.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:56.326: INFO: Number of nodes with available pods: 0 Jan 13 07:41:56.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:57.325: INFO: Number of nodes with available pods: 0 Jan 13 07:41:57.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:58.324: INFO: Number of nodes with available pods: 0 Jan 13 07:41:58.324: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:41:59.327: INFO: Number of nodes with available pods: 0 Jan 13 07:41:59.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:00.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:00.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:01.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:01.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:02.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:02.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:03.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:03.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:04.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:04.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:05.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:05.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:06.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:06.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:07.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:07.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:08.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:08.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:09.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:09.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:10.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:10.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:11.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:11.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:12.324: INFO: Number of nodes with available pods: 0 Jan 13 07:42:12.324: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:13.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:13.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:14.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:14.328: INFO: Node leguer-worker is running 
more than one daemon pod Jan 13 07:42:15.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:15.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:16.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:16.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:17.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:17.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:18.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:18.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:19.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:19.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:20.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:20.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:21.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:21.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:22.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:22.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:23.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:23.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:24.328: INFO: Number of nodes with available pods: 0 Jan 13 07:42:24.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:25.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:25.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:26.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:26.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:27.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:27.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:28.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:28.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:29.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:29.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:30.324: INFO: Number of nodes with available pods: 0 Jan 13 07:42:30.324: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:31.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:31.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:32.325: INFO: Number of nodes with available pods: 0 Jan 13 07:42:32.325: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:33.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:33.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:34.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:34.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:35.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:35.326: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:36.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:36.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:37.327: INFO: Number of nodes with available pods: 0 Jan 13 07:42:37.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:38.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:38.326: INFO: Node leguer-worker is running 
more than one daemon pod Jan 13 07:42:39.328: INFO: Number of nodes with available pods: 0 Jan 13 07:42:39.328: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:40.326: INFO: Number of nodes with available pods: 0 Jan 13 07:42:40.327: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:41.340: INFO: Number of nodes with available pods: 0 Jan 13 07:42:41.340: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:42.701: INFO: Number of nodes with available pods: 0 Jan 13 07:42:42.701: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:43.324: INFO: Number of nodes with available pods: 0 Jan 13 07:42:43.324: INFO: Node leguer-worker is running more than one daemon pod Jan 13 07:42:44.327: INFO: Number of nodes with available pods: 1 Jan 13 07:42:44.327: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6161, will wait for the garbage collector to delete the pods Jan 13 07:42:44.401: INFO: Deleting DaemonSet.extensions daemon-set took: 8.668548ms Jan 13 07:42:45.001: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.842129ms Jan 13 07:42:50.208: INFO: Number of nodes with available pods: 0 Jan 13 07:42:50.208: INFO: Number of running nodes: 0, number of available pods: 0 Jan 13 07:42:50.213: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"506066"},"items":null} Jan 13 07:42:50.218: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"506066"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:42:50.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6161" for this suite. 
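The "complex daemon" test above drives scheduling entirely through labels: the DaemonSet's pod template carries a nodeSelector, so no pods run until a node is labelled to match, and relabelling the node (blue to green) unschedules and later reschedules the daemon pod once the selector is updated too. A rough client-go sketch of the node-label side follows; the clientset is assumed and the color label key/value are illustrative, not the randomly generated labels the test uses.

// Sketch: relabel a node so that a DaemonSet whose pod template sets
//   Spec.Template.Spec.NodeSelector = map[string]string{"color": "green"}
// starts scheduling a pod there, mirroring the label steps in the test above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func labelNodeForDaemon(ctx context.Context, clientset *kubernetes.Clientset, nodeName string) error {
	// A merge patch adds (or overwrites) the label; changing the value again
	// makes existing daemon pods unschedulable on this node.
	patch := []byte(`{"metadata":{"labels":{"color":"green"}}}`)
	_, err := clientset.CoreV1().Nodes().Patch(ctx, nodeName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}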
• [SLOW TEST:80.365 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":309,"completed":200,"skipped":3439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:42:50.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 07:42:50.387: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 07:42:50.422: INFO: Waiting for terminating namespaces to be deleted... Jan 13 07:42:50.428: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 07:42:50.447: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.447: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 07:42:50.447: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.447: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 07:42:50.447: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.447: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 07:42:50.447: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.447: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 07:42:50.447: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.447: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 07:42:50.447: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.448: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 07:42:50.448: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.448: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 07:42:50.448: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.448: INFO: Container 
chaos-daemon ready: true, restart count 0 Jan 13 07:42:50.448: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.448: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 07:42:50.448: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.448: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 07:42:50.448: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 07:42:50.460: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.460: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 07:42:50.461: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 07:42:50.461: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 07:42:50.461: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 07:42:50.461: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 07:42:50.461: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 07:42:50.461: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 07:42:50.461: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 07:42:50.461: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 07:42:50.461: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-93e134cd-de4e-4b4b-85fc-e8ff275737ed 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-93e134cd-de4e-4b4b-85fc-e8ff275737ed off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-93e134cd-de4e-4b4b-85fc-e8ff275737ed [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:43:00.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6327" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:10.358 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":309,"completed":201,"skipped":3478,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:43:00.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-53d1b73b-e6d1-4115-9cce-9c9db8ee8d11 in namespace container-probe-1612 Jan 13 07:43:04.856: INFO: Started pod busybox-53d1b73b-e6d1-4115-9cce-9c9db8ee8d11 in namespace container-probe-1612 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 07:43:04.860: INFO: Initial restart count of pod busybox-53d1b73b-e6d1-4115-9cce-9c9db8ee8d11 is 0 Jan 13 07:43:53.048: INFO: Restart count of pod container-probe-1612/busybox-53d1b73b-e6d1-4115-9cce-9c9db8ee8d11 is now 1 (48.187658769s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:43:53.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1612" for this suite. 
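The container-probe test just above relies on an exec liveness probe that runs "cat /tmp/health" inside the container; the container's own command removes that file after a short while, the probe starts failing, and the kubelet restarts the container, which is why the restart count moves from 0 to 1 roughly 48 seconds in. A minimal sketch of such a pod spec is below; the image, command and timings are illustrative, and the Handler field matches the core/v1 Go types contemporary with this log (it was renamed ProbeHandler in later releases).

// Sketch: a pod whose container is restarted by an exec liveness probe once
// /tmp/health disappears, as in the container-probe test above.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file, keep it around briefly, then remove it.
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
			// Always restart, so the kubelet brings the container back after the probe fails.
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
}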
• [SLOW TEST:52.456 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":202,"skipped":3493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:43:53.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 13 07:43:53.276: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506282 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:43:53.277: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506282 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:43:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 13 07:44:03.295: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506315 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:44:03.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506315 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 13 07:44:13.314: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506335 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:44:13.315: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506335 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 13 07:44:23.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506355 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:44:23.328: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5345 b6e46d4d-94d5-4fb7-af92-d7a19ebb7922 506355 0 2021-01-13 07:43:53 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 13 07:44:33.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5345 cf85dd9f-4b28-4171-97d1-3c6dcebeb185 506375 0 2021-01-13 07:44:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:44:33.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5345 cf85dd9f-4b28-4171-97d1-3c6dcebeb185 506375 0 2021-01-13 07:44:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 13 07:44:43.356: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5345 cf85dd9f-4b28-4171-97d1-3c6dcebeb185 506395 0 2021-01-13 07:44:33 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 07:44:43.357: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5345 cf85dd9f-4b28-4171-97d1-3c6dcebeb185 506395 0 2021-01-13 07:44:33 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-01-13 07:44:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:44:53.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5345" for this suite. • [SLOW TEST:60.236 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":309,"completed":203,"skipped":3528,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:44:53.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Jan 13 07:44:53.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 create -f -' Jan 13 07:44:56.138: INFO: stderr: "" Jan 13 07:44:56.138: INFO: stdout: "pod/pause created\n" Jan 13 07:44:56.138: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 13 07:44:56.138: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4247" to be "running and ready" Jan 13 07:44:56.194: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 56.137625ms Jan 13 07:44:58.204: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06552265s Jan 13 07:45:00.222: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.0832309s Jan 13 07:45:00.222: INFO: Pod "pause" satisfied condition "running and ready" Jan 13 07:45:00.222: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: adding the label testing-label with value testing-label-value to a pod Jan 13 07:45:00.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 label pods pause testing-label=testing-label-value' Jan 13 07:45:01.600: INFO: stderr: "" Jan 13 07:45:01.600: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 13 07:45:01.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 get pod pause -L testing-label' Jan 13 07:45:02.883: INFO: stderr: "" Jan 13 07:45:02.883: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 13 07:45:02.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 label pods pause testing-label-' Jan 13 07:45:04.173: INFO: stderr: "" Jan 13 07:45:04.173: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 13 07:45:04.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 get pod pause -L testing-label' Jan 13 07:45:05.565: INFO: stderr: "" Jan 13 07:45:05.565: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Jan 13 07:45:05.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 delete --grace-period=0 --force -f -' Jan 13 07:45:07.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 07:45:07.034: INFO: stdout: "pod \"pause\" force deleted\n" Jan 13 07:45:07.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 get rc,svc -l name=pause --no-headers' Jan 13 07:45:08.356: INFO: stderr: "No resources found in kubectl-4247 namespace.\n" Jan 13 07:45:08.356: INFO: stdout: "" Jan 13 07:45:08.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-4247 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 07:45:09.635: INFO: stderr: "" Jan 13 07:45:09.636: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:45:09.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4247" for this suite. 
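The Kubectl label test above adds testing-label=testing-label-value to the pause pod, verifies it with "get pod pause -L testing-label", and then removes it with the trailing-dash form (testing-label-). The same add/remove pair can also be expressed programmatically as JSON merge patches against the pod's metadata; the sketch below is one hedged way to do that with client-go, with the clientset, namespace and pod name assumed.

// Sketch: add and then remove a pod label with JSON merge patches, a
// programmatic counterpart to the `kubectl label` / `kubectl label key-` pair above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func toggleTestingLabel(ctx context.Context, clientset *kubernetes.Clientset, ns, pod string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := clientset.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, add, metav1.PatchOptions{}); err != nil {
		return err
	}
	// In a merge patch, setting the value to null deletes the label again.
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := clientset.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, remove, metav1.PatchOptions{})
	return err
}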
• [SLOW TEST:16.276 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":309,"completed":204,"skipped":3529,"failed":0} SS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:45:09.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 07:45:09.766: INFO: Waiting up to 5m0s for pod "downward-api-3ed818b6-6140-47e7-8052-177551e5c04e" in namespace "downward-api-561" to be "Succeeded or Failed" Jan 13 07:45:09.786: INFO: Pod "downward-api-3ed818b6-6140-47e7-8052-177551e5c04e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.691114ms Jan 13 07:45:11.795: INFO: Pod "downward-api-3ed818b6-6140-47e7-8052-177551e5c04e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027739874s Jan 13 07:45:13.802: INFO: Pod "downward-api-3ed818b6-6140-47e7-8052-177551e5c04e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03538333s STEP: Saw pod success Jan 13 07:45:13.802: INFO: Pod "downward-api-3ed818b6-6140-47e7-8052-177551e5c04e" satisfied condition "Succeeded or Failed" Jan 13 07:45:13.808: INFO: Trying to get logs from node leguer-worker pod downward-api-3ed818b6-6140-47e7-8052-177551e5c04e container dapi-container: STEP: delete the pod Jan 13 07:45:13.874: INFO: Waiting for pod downward-api-3ed818b6-6140-47e7-8052-177551e5c04e to disappear Jan 13 07:45:13.879: INFO: Pod downward-api-3ed818b6-6140-47e7-8052-177551e5c04e no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:45:13.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-561" for this suite. 
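The Downward API test above injects the node's IP into the container environment through a status.hostIP fieldRef and then checks the container's output. A minimal sketch of that container spec follows; the container name, image and command are illustrative rather than the test's own.

// Sketch: expose the node's IP to a container through the downward API,
// as exercised by the "should provide host IP as an env var" test above.
package main

import corev1 "k8s.io/api/core/v1"

func downwardAPIContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo $HOST_IP"},
		Env: []corev1.EnvVar{{
			Name: "HOST_IP",
			ValueFrom: &corev1.EnvVarSource{
				// status.hostIP resolves to the IP of the node the pod is scheduled on.
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
			},
		}},
	}
}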
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":309,"completed":205,"skipped":3531,"failed":0} ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:45:13.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:45:14.024: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e764d346-d756-4a4c-aa24-854222ae5eec" in namespace "security-context-test-8205" to be "Succeeded or Failed" Jan 13 07:45:14.030: INFO: Pod "alpine-nnp-false-e764d346-d756-4a4c-aa24-854222ae5eec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.929797ms Jan 13 07:45:19.944: INFO: Pod "alpine-nnp-false-e764d346-d756-4a4c-aa24-854222ae5eec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.920014418s Jan 13 07:45:21.982: INFO: Pod "alpine-nnp-false-e764d346-d756-4a4c-aa24-854222ae5eec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.958237477s Jan 13 07:45:21.983: INFO: Pod "alpine-nnp-false-e764d346-d756-4a4c-aa24-854222ae5eec" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:45:22.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8205" for this suite. 
• [SLOW TEST:8.125 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":206,"skipped":3531,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:45:22.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-4909 Jan 13 07:45:24.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 13 07:45:26.056: INFO: stderr: "I0113 07:45:25.940499 2141 log.go:181] (0x40000d4c60) (0x4000ce81e0) Create stream\nI0113 07:45:25.947525 2141 log.go:181] (0x40000d4c60) (0x4000ce81e0) Stream added, broadcasting: 1\nI0113 07:45:25.961197 2141 log.go:181] (0x40000d4c60) Reply frame received for 1\nI0113 07:45:25.962438 2141 log.go:181] (0x40000d4c60) (0x4000ce8280) Create stream\nI0113 07:45:25.962606 2141 log.go:181] (0x40000d4c60) (0x4000ce8280) Stream added, broadcasting: 3\nI0113 07:45:25.964785 2141 log.go:181] (0x40000d4c60) Reply frame received for 3\nI0113 07:45:25.965309 2141 log.go:181] (0x40000d4c60) (0x4000ca68c0) Create stream\nI0113 07:45:25.965410 2141 log.go:181] (0x40000d4c60) (0x4000ca68c0) Stream added, broadcasting: 5\nI0113 07:45:25.966744 2141 log.go:181] (0x40000d4c60) Reply frame received for 5\nI0113 07:45:26.032714 2141 log.go:181] (0x40000d4c60) Data frame received for 5\nI0113 07:45:26.033139 2141 log.go:181] (0x4000ca68c0) (5) Data frame handling\nI0113 07:45:26.033863 2141 log.go:181] (0x4000ca68c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0113 07:45:26.035838 2141 log.go:181] (0x40000d4c60) Data frame received for 3\nI0113 07:45:26.035993 2141 log.go:181] (0x4000ce8280) (3) Data frame handling\nI0113 07:45:26.036162 2141 log.go:181] (0x4000ce8280) (3) Data frame 
sent\nI0113 07:45:26.036961 2141 log.go:181] (0x40000d4c60) Data frame received for 5\nI0113 07:45:26.037072 2141 log.go:181] (0x4000ca68c0) (5) Data frame handling\nI0113 07:45:26.037299 2141 log.go:181] (0x40000d4c60) Data frame received for 3\nI0113 07:45:26.037493 2141 log.go:181] (0x4000ce8280) (3) Data frame handling\nI0113 07:45:26.039852 2141 log.go:181] (0x40000d4c60) Data frame received for 1\nI0113 07:45:26.040038 2141 log.go:181] (0x4000ce81e0) (1) Data frame handling\nI0113 07:45:26.040274 2141 log.go:181] (0x4000ce81e0) (1) Data frame sent\nI0113 07:45:26.041584 2141 log.go:181] (0x40000d4c60) (0x4000ce81e0) Stream removed, broadcasting: 1\nI0113 07:45:26.045007 2141 log.go:181] (0x40000d4c60) Go away received\nI0113 07:45:26.047803 2141 log.go:181] (0x40000d4c60) (0x4000ce81e0) Stream removed, broadcasting: 1\nI0113 07:45:26.048178 2141 log.go:181] (0x40000d4c60) (0x4000ce8280) Stream removed, broadcasting: 3\nI0113 07:45:26.048624 2141 log.go:181] (0x40000d4c60) (0x4000ca68c0) Stream removed, broadcasting: 5\n" Jan 13 07:45:26.057: INFO: stdout: "iptables" Jan 13 07:45:26.057: INFO: proxyMode: iptables Jan 13 07:45:26.198: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 13 07:45:26.203: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4909 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4909 I0113 07:45:26.830407 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4909, replica count: 3 I0113 07:45:29.881947 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 07:45:32.888224 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 07:45:35.889645 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 07:45:35.912: INFO: Creating new exec pod Jan 13 07:45:40.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jan 13 07:45:42.540: INFO: stderr: "I0113 07:45:42.438848 2161 log.go:181] (0x40001b3130) (0x40005463c0) Create stream\nI0113 07:45:42.441228 2161 log.go:181] (0x40001b3130) (0x40005463c0) Stream added, broadcasting: 1\nI0113 07:45:42.452985 2161 log.go:181] (0x40001b3130) Reply frame received for 1\nI0113 07:45:42.454312 2161 log.go:181] (0x40001b3130) (0x400075e000) Create stream\nI0113 07:45:42.454436 2161 log.go:181] (0x40001b3130) (0x400075e000) Stream added, broadcasting: 3\nI0113 07:45:42.456364 2161 log.go:181] (0x40001b3130) Reply frame received for 3\nI0113 07:45:42.456646 2161 log.go:181] (0x40001b3130) (0x4000546500) Create stream\nI0113 07:45:42.456709 2161 log.go:181] (0x40001b3130) (0x4000546500) Stream added, broadcasting: 5\nI0113 07:45:42.458153 2161 log.go:181] (0x40001b3130) Reply frame received for 5\nI0113 07:45:42.520610 2161 log.go:181] (0x40001b3130) Data frame received for 5\nI0113 07:45:42.521157 2161 log.go:181] (0x4000546500) (5) Data frame handling\nI0113 07:45:42.521388 2161 log.go:181] (0x40001b3130) Data frame received for 3\nI0113 07:45:42.521521 
2161 log.go:181] (0x400075e000) (3) Data frame handling\nI0113 07:45:42.521972 2161 log.go:181] (0x4000546500) (5) Data frame sent\nI0113 07:45:42.522244 2161 log.go:181] (0x40001b3130) Data frame received for 5\nI0113 07:45:42.522306 2161 log.go:181] (0x4000546500) (5) Data frame handling\nI0113 07:45:42.522473 2161 log.go:181] (0x40001b3130) Data frame received for 1\nI0113 07:45:42.522602 2161 log.go:181] (0x40005463c0) (1) Data frame handling\nI0113 07:45:42.522711 2161 log.go:181] (0x40005463c0) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0113 07:45:42.524733 2161 log.go:181] (0x4000546500) (5) Data frame sent\nI0113 07:45:42.524802 2161 log.go:181] (0x40001b3130) Data frame received for 5\nI0113 07:45:42.524923 2161 log.go:181] (0x4000546500) (5) Data frame handling\nI0113 07:45:42.525919 2161 log.go:181] (0x40001b3130) (0x40005463c0) Stream removed, broadcasting: 1\nI0113 07:45:42.528112 2161 log.go:181] (0x40001b3130) Go away received\nI0113 07:45:42.532285 2161 log.go:181] (0x40001b3130) (0x40005463c0) Stream removed, broadcasting: 1\nI0113 07:45:42.532600 2161 log.go:181] (0x40001b3130) (0x400075e000) Stream removed, broadcasting: 3\nI0113 07:45:42.532810 2161 log.go:181] (0x40001b3130) (0x4000546500) Stream removed, broadcasting: 5\n" Jan 13 07:45:42.541: INFO: stdout: "" Jan 13 07:45:42.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c nc -zv -t -w 2 10.96.32.139 80' Jan 13 07:45:44.169: INFO: stderr: "I0113 07:45:44.018835 2182 log.go:181] (0x40005e7e40) (0x40005e2aa0) Create stream\nI0113 07:45:44.024218 2182 log.go:181] (0x40005e7e40) (0x40005e2aa0) Stream added, broadcasting: 1\nI0113 07:45:44.046407 2182 log.go:181] (0x40005e7e40) Reply frame received for 1\nI0113 07:45:44.047063 2182 log.go:181] (0x40005e7e40) (0x4000bd4000) Create stream\nI0113 07:45:44.047131 2182 log.go:181] (0x40005e7e40) (0x4000bd4000) Stream added, broadcasting: 3\nI0113 07:45:44.048599 2182 log.go:181] (0x40005e7e40) Reply frame received for 3\nI0113 07:45:44.049011 2182 log.go:181] (0x40005e7e40) (0x40007c0000) Create stream\nI0113 07:45:44.049080 2182 log.go:181] (0x40005e7e40) (0x40007c0000) Stream added, broadcasting: 5\nI0113 07:45:44.049945 2182 log.go:181] (0x40005e7e40) Reply frame received for 5\nI0113 07:45:44.147069 2182 log.go:181] (0x40005e7e40) Data frame received for 3\nI0113 07:45:44.147371 2182 log.go:181] (0x4000bd4000) (3) Data frame handling\nI0113 07:45:44.147820 2182 log.go:181] (0x40005e7e40) Data frame received for 5\nI0113 07:45:44.148062 2182 log.go:181] (0x40007c0000) (5) Data frame handling\nI0113 07:45:44.148346 2182 log.go:181] (0x40005e7e40) Data frame received for 1\nI0113 07:45:44.148526 2182 log.go:181] (0x40005e2aa0) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.32.139 80\nConnection to 10.96.32.139 80 port [tcp/http] succeeded!\nI0113 07:45:44.150754 2182 log.go:181] (0x40007c0000) (5) Data frame sent\nI0113 07:45:44.151013 2182 log.go:181] (0x40005e2aa0) (1) Data frame sent\nI0113 07:45:44.151317 2182 log.go:181] (0x40005e7e40) Data frame received for 5\nI0113 07:45:44.151447 2182 log.go:181] (0x40007c0000) (5) Data frame handling\nI0113 07:45:44.152323 2182 log.go:181] (0x40005e7e40) (0x40005e2aa0) Stream removed, broadcasting: 1\nI0113 07:45:44.154774 2182 log.go:181] (0x40005e7e40) Go away received\nI0113 07:45:44.159465 2182 
log.go:181] (0x40005e7e40) (0x40005e2aa0) Stream removed, broadcasting: 1\nI0113 07:45:44.159895 2182 log.go:181] (0x40005e7e40) (0x4000bd4000) Stream removed, broadcasting: 3\nI0113 07:45:44.160453 2182 log.go:181] (0x40005e7e40) (0x40007c0000) Stream removed, broadcasting: 5\n" Jan 13 07:45:44.170: INFO: stdout: "" Jan 13 07:45:44.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30581' Jan 13 07:45:45.772: INFO: stderr: "I0113 07:45:45.638164 2202 log.go:181] (0x40006c0420) (0x4000c981e0) Create stream\nI0113 07:45:45.642182 2202 log.go:181] (0x40006c0420) (0x4000c981e0) Stream added, broadcasting: 1\nI0113 07:45:45.658035 2202 log.go:181] (0x40006c0420) Reply frame received for 1\nI0113 07:45:45.659613 2202 log.go:181] (0x40006c0420) (0x4000d94000) Create stream\nI0113 07:45:45.659753 2202 log.go:181] (0x40006c0420) (0x4000d94000) Stream added, broadcasting: 3\nI0113 07:45:45.661847 2202 log.go:181] (0x40006c0420) Reply frame received for 3\nI0113 07:45:45.662363 2202 log.go:181] (0x40006c0420) (0x4000c98320) Create stream\nI0113 07:45:45.662480 2202 log.go:181] (0x40006c0420) (0x4000c98320) Stream added, broadcasting: 5\nI0113 07:45:45.664079 2202 log.go:181] (0x40006c0420) Reply frame received for 5\nI0113 07:45:45.750547 2202 log.go:181] (0x40006c0420) Data frame received for 3\nI0113 07:45:45.751374 2202 log.go:181] (0x40006c0420) Data frame received for 5\nI0113 07:45:45.751656 2202 log.go:181] (0x4000c98320) (5) Data frame handling\nI0113 07:45:45.751783 2202 log.go:181] (0x40006c0420) Data frame received for 1\nI0113 07:45:45.751952 2202 log.go:181] (0x4000c981e0) (1) Data frame handling\nI0113 07:45:45.752312 2202 log.go:181] (0x4000d94000) (3) Data frame handling\nI0113 07:45:45.753751 2202 log.go:181] (0x4000c981e0) (1) Data frame sent\nI0113 07:45:45.753873 2202 log.go:181] (0x4000c98320) (5) Data frame sent\nI0113 07:45:45.754431 2202 log.go:181] (0x40006c0420) Data frame received for 5\nI0113 07:45:45.754503 2202 log.go:181] (0x4000c98320) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30581\nI0113 07:45:45.756615 2202 log.go:181] (0x40006c0420) (0x4000c981e0) Stream removed, broadcasting: 1\nConnection to 172.18.0.13 30581 port [tcp/30581] succeeded!\nI0113 07:45:45.759458 2202 log.go:181] (0x4000c98320) (5) Data frame sent\nI0113 07:45:45.759615 2202 log.go:181] (0x40006c0420) Data frame received for 5\nI0113 07:45:45.759736 2202 log.go:181] (0x4000c98320) (5) Data frame handling\nI0113 07:45:45.761277 2202 log.go:181] (0x40006c0420) Go away received\nI0113 07:45:45.764477 2202 log.go:181] (0x40006c0420) (0x4000c981e0) Stream removed, broadcasting: 1\nI0113 07:45:45.764754 2202 log.go:181] (0x40006c0420) (0x4000d94000) Stream removed, broadcasting: 3\nI0113 07:45:45.765035 2202 log.go:181] (0x40006c0420) (0x4000c98320) Stream removed, broadcasting: 5\n" Jan 13 07:45:45.773: INFO: stdout: "" Jan 13 07:45:45.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30581' Jan 13 07:45:47.368: INFO: stderr: "I0113 07:45:47.234445 2222 log.go:181] (0x4000e90000) (0x4000528820) Create stream\nI0113 07:45:47.239405 2222 log.go:181] (0x4000e90000) (0x4000528820) Stream added, broadcasting: 1\nI0113 07:45:47.254026 2222 log.go:181] (0x4000e90000) Reply frame received 
for 1\nI0113 07:45:47.255126 2222 log.go:181] (0x4000e90000) (0x4000388500) Create stream\nI0113 07:45:47.255276 2222 log.go:181] (0x4000e90000) (0x4000388500) Stream added, broadcasting: 3\nI0113 07:45:47.257654 2222 log.go:181] (0x4000e90000) Reply frame received for 3\nI0113 07:45:47.258133 2222 log.go:181] (0x4000e90000) (0x4000bba000) Create stream\nI0113 07:45:47.258242 2222 log.go:181] (0x4000e90000) (0x4000bba000) Stream added, broadcasting: 5\nI0113 07:45:47.259833 2222 log.go:181] (0x4000e90000) Reply frame received for 5\nI0113 07:45:47.349246 2222 log.go:181] (0x4000e90000) Data frame received for 5\nI0113 07:45:47.349544 2222 log.go:181] (0x4000bba000) (5) Data frame handling\nI0113 07:45:47.349928 2222 log.go:181] (0x4000e90000) Data frame received for 3\nI0113 07:45:47.350269 2222 log.go:181] (0x4000388500) (3) Data frame handling\nI0113 07:45:47.350662 2222 log.go:181] (0x4000bba000) (5) Data frame sent\nI0113 07:45:47.350971 2222 log.go:181] (0x4000e90000) Data frame received for 1\n+ nc -zv -t -w 2 172.18.0.12 30581\nConnection to 172.18.0.12 30581 port [tcp/30581] succeeded!\nI0113 07:45:47.351136 2222 log.go:181] (0x4000528820) (1) Data frame handling\nI0113 07:45:47.351300 2222 log.go:181] (0x4000528820) (1) Data frame sent\nI0113 07:45:47.351874 2222 log.go:181] (0x4000e90000) Data frame received for 5\nI0113 07:45:47.352014 2222 log.go:181] (0x4000bba000) (5) Data frame handling\nI0113 07:45:47.353752 2222 log.go:181] (0x4000e90000) (0x4000528820) Stream removed, broadcasting: 1\nI0113 07:45:47.357649 2222 log.go:181] (0x4000e90000) Go away received\nI0113 07:45:47.360263 2222 log.go:181] (0x4000e90000) (0x4000528820) Stream removed, broadcasting: 1\nI0113 07:45:47.360603 2222 log.go:181] (0x4000e90000) (0x4000388500) Stream removed, broadcasting: 3\nI0113 07:45:47.360929 2222 log.go:181] (0x4000e90000) (0x4000bba000) Stream removed, broadcasting: 5\n" Jan 13 07:45:47.369: INFO: stdout: "" Jan 13 07:45:47.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.13:30581/ ; done' Jan 13 07:45:49.043: INFO: stderr: "I0113 07:45:48.805357 2243 log.go:181] (0x4000232370) (0x400064a1e0) Create stream\nI0113 07:45:48.809967 2243 log.go:181] (0x4000232370) (0x400064a1e0) Stream added, broadcasting: 1\nI0113 07:45:48.821744 2243 log.go:181] (0x4000232370) Reply frame received for 1\nI0113 07:45:48.822328 2243 log.go:181] (0x4000232370) (0x4000b58000) Create stream\nI0113 07:45:48.822395 2243 log.go:181] (0x4000232370) (0x4000b58000) Stream added, broadcasting: 3\nI0113 07:45:48.823727 2243 log.go:181] (0x4000232370) Reply frame received for 3\nI0113 07:45:48.823941 2243 log.go:181] (0x4000232370) (0x400064a280) Create stream\nI0113 07:45:48.823993 2243 log.go:181] (0x4000232370) (0x400064a280) Stream added, broadcasting: 5\nI0113 07:45:48.825317 2243 log.go:181] (0x4000232370) Reply frame received for 5\nI0113 07:45:48.918994 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.919321 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.919431 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.919614 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.919861 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.920030 2243 log.go:181] (0x4000b58000) (3) Data frame sent\n+ seq 0 
15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.925539 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.925606 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.925708 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.925799 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.925896 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.925978 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.926038 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.926099 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.926173 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.931138 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.931201 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.931269 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.931869 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.931952 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.932022 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.932090 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.932151 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.932227 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.939044 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.939125 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.939209 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.939982 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.940055 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.940137 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -sI0113 07:45:48.940206 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.940301 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.940406 2243 log.go:181] (0x400064a280) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.940494 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.940567 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.940653 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.944199 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.944300 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.944430 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.945365 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.945541 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI0113 07:45:48.945702 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.945874 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.946047 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.946249 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.946384 2243 log.go:181] (0x400064a280) (5) Data frame handling\n 2 http://172.18.0.13:30581/\nI0113 07:45:48.946508 2243 log.go:181] (0x4000b58000) (3) Data frame 
sent\nI0113 07:45:48.946681 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.951390 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.951478 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.951569 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.952307 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.952449 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.952573 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.952716 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.952972 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.953094 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.960156 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.960252 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.960340 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.960469 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.960589 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -sI0113 07:45:48.960668 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.960769 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.960934 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.961036 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.961115 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.961218 2243 log.go:181] (0x400064a280) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.961299 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.967322 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.967447 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.967630 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.968262 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.968403 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.968509 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.968638 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.968767 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.968980 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.973193 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.973308 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.973452 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.974361 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.974472 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.974589 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.974761 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.974892 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:48.975009 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.980408 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.980568 2243 log.go:181] (0x4000b58000) (3) Data 
frame handling\nI0113 07:45:48.980755 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.981001 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.981104 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.981198 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.981312 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.981401 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.981489 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.987951 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.988069 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.988202 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.989126 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.989252 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.989360 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.989488 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:48.989585 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.989926 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.995571 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.995683 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.995819 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.996648 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:48.996798 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:48.997070 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:48.997171 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:48.997274 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:48.997340 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:49.001426 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.001488 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.001563 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.002319 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.002393 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.002450 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.002513 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:49.002581 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:49.002656 2243 log.go:181] (0x400064a280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:49.005834 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.005893 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.005966 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.006920 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:49.007021 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:49.007117 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.007260 2243 log.go:181] (0x4000b58000) (3) Data 
frame handling\nI0113 07:45:49.007374 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:49.007479 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.011018 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.011101 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.011184 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.012987 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:49.013099 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:49.013233 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.013415 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.013564 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:49.013669 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.016407 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.016507 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.016597 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.017351 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:49.017433 2243 log.go:181] (0x400064a280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:49.017519 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.017632 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.017761 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.017854 2243 log.go:181] (0x400064a280) (5) Data frame sent\nI0113 07:45:49.023688 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.023773 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.023864 2243 log.go:181] (0x4000b58000) (3) Data frame sent\nI0113 07:45:49.024777 2243 log.go:181] (0x4000232370) Data frame received for 3\nI0113 07:45:49.024952 2243 log.go:181] (0x4000b58000) (3) Data frame handling\nI0113 07:45:49.025121 2243 log.go:181] (0x4000232370) Data frame received for 5\nI0113 07:45:49.025261 2243 log.go:181] (0x400064a280) (5) Data frame handling\nI0113 07:45:49.026893 2243 log.go:181] (0x4000232370) Data frame received for 1\nI0113 07:45:49.026992 2243 log.go:181] (0x400064a1e0) (1) Data frame handling\nI0113 07:45:49.027097 2243 log.go:181] (0x400064a1e0) (1) Data frame sent\nI0113 07:45:49.027979 2243 log.go:181] (0x4000232370) (0x400064a1e0) Stream removed, broadcasting: 1\nI0113 07:45:49.032161 2243 log.go:181] (0x4000232370) Go away received\nI0113 07:45:49.035208 2243 log.go:181] (0x4000232370) (0x400064a1e0) Stream removed, broadcasting: 1\nI0113 07:45:49.035820 2243 log.go:181] (0x4000232370) (0x4000b58000) Stream removed, broadcasting: 3\nI0113 07:45:49.036078 2243 log.go:181] (0x4000232370) (0x400064a280) Stream removed, broadcasting: 5\n" Jan 13 07:45:49.047: INFO: stdout: "\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6\naffinity-nodeport-timeout-n88v6" Jan 
13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.047: INFO: Received response from host: affinity-nodeport-timeout-n88v6 Jan 13 07:45:49.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:30581/' Jan 13 07:45:50.631: INFO: stderr: "I0113 07:45:50.499136 2263 log.go:181] (0x4000da0dc0) (0x40004f7900) Create stream\nI0113 07:45:50.502166 2263 log.go:181] (0x4000da0dc0) (0x40004f7900) Stream added, broadcasting: 1\nI0113 07:45:50.511656 2263 log.go:181] (0x4000da0dc0) Reply frame received for 1\nI0113 07:45:50.512188 2263 log.go:181] (0x4000da0dc0) (0x4000816320) Create stream\nI0113 07:45:50.512268 2263 log.go:181] (0x4000da0dc0) (0x4000816320) Stream added, broadcasting: 3\nI0113 07:45:50.513969 2263 log.go:181] (0x4000da0dc0) Reply frame received for 3\nI0113 07:45:50.514491 2263 log.go:181] (0x4000da0dc0) (0x4000416e60) Create stream\nI0113 07:45:50.514630 2263 log.go:181] (0x4000da0dc0) (0x4000416e60) Stream added, broadcasting: 5\nI0113 07:45:50.515945 2263 log.go:181] (0x4000da0dc0) Reply frame received for 5\nI0113 07:45:50.609646 2263 log.go:181] (0x4000da0dc0) Data frame received for 5\nI0113 07:45:50.610075 2263 log.go:181] (0x4000416e60) (5) Data frame handling\nI0113 07:45:50.610341 2263 log.go:181] (0x4000da0dc0) Data frame received for 3\nI0113 07:45:50.610428 2263 log.go:181] (0x4000816320) (3) Data frame handling\nI0113 07:45:50.611001 2263 log.go:181] (0x4000816320) (3) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:45:50.612061 2263 log.go:181] (0x4000416e60) (5) Data frame sent\nI0113 07:45:50.612377 2263 log.go:181] (0x4000da0dc0) Data frame received for 5\nI0113 07:45:50.612537 2263 log.go:181] (0x4000416e60) (5) Data frame handling\nI0113 07:45:50.613036 2263 log.go:181] (0x4000da0dc0) Data frame received for 3\nI0113 07:45:50.613211 2263 log.go:181] (0x4000816320) (3) Data frame handling\nI0113 07:45:50.613829 2263 log.go:181] (0x4000da0dc0) Data frame received for 1\nI0113 07:45:50.613942 2263 log.go:181] (0x40004f7900) (1) Data frame handling\nI0113 07:45:50.614072 2263 log.go:181] 
(0x40004f7900) (1) Data frame sent\nI0113 07:45:50.615783 2263 log.go:181] (0x4000da0dc0) (0x40004f7900) Stream removed, broadcasting: 1\nI0113 07:45:50.618646 2263 log.go:181] (0x4000da0dc0) Go away received\nI0113 07:45:50.622231 2263 log.go:181] (0x4000da0dc0) (0x40004f7900) Stream removed, broadcasting: 1\nI0113 07:45:50.622766 2263 log.go:181] (0x4000da0dc0) (0x4000816320) Stream removed, broadcasting: 3\nI0113 07:45:50.623276 2263 log.go:181] (0x4000da0dc0) (0x4000416e60) Stream removed, broadcasting: 5\n" Jan 13 07:45:50.632: INFO: stdout: "affinity-nodeport-timeout-n88v6" Jan 13 07:46:10.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-4909 exec execpod-affinityw8wrp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.13:30581/' Jan 13 07:46:12.258: INFO: stderr: "I0113 07:46:12.152143 2283 log.go:181] (0x400003ac60) (0x4000a2a320) Create stream\nI0113 07:46:12.155078 2283 log.go:181] (0x400003ac60) (0x4000a2a320) Stream added, broadcasting: 1\nI0113 07:46:12.164394 2283 log.go:181] (0x400003ac60) Reply frame received for 1\nI0113 07:46:12.165026 2283 log.go:181] (0x400003ac60) (0x40003920a0) Create stream\nI0113 07:46:12.165098 2283 log.go:181] (0x400003ac60) (0x40003920a0) Stream added, broadcasting: 3\nI0113 07:46:12.166401 2283 log.go:181] (0x400003ac60) Reply frame received for 3\nI0113 07:46:12.166670 2283 log.go:181] (0x400003ac60) (0x40003926e0) Create stream\nI0113 07:46:12.166744 2283 log.go:181] (0x400003ac60) (0x40003926e0) Stream added, broadcasting: 5\nI0113 07:46:12.168056 2283 log.go:181] (0x400003ac60) Reply frame received for 5\nI0113 07:46:12.231968 2283 log.go:181] (0x400003ac60) Data frame received for 5\nI0113 07:46:12.232199 2283 log.go:181] (0x40003926e0) (5) Data frame handling\nI0113 07:46:12.232695 2283 log.go:181] (0x40003926e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.13:30581/\nI0113 07:46:12.236702 2283 log.go:181] (0x400003ac60) Data frame received for 3\nI0113 07:46:12.236933 2283 log.go:181] (0x40003920a0) (3) Data frame handling\nI0113 07:46:12.237138 2283 log.go:181] (0x40003920a0) (3) Data frame sent\nI0113 07:46:12.237910 2283 log.go:181] (0x400003ac60) Data frame received for 3\nI0113 07:46:12.238078 2283 log.go:181] (0x400003ac60) Data frame received for 5\nI0113 07:46:12.238214 2283 log.go:181] (0x40003926e0) (5) Data frame handling\nI0113 07:46:12.238381 2283 log.go:181] (0x40003920a0) (3) Data frame handling\nI0113 07:46:12.240128 2283 log.go:181] (0x400003ac60) Data frame received for 1\nI0113 07:46:12.240272 2283 log.go:181] (0x4000a2a320) (1) Data frame handling\nI0113 07:46:12.240440 2283 log.go:181] (0x4000a2a320) (1) Data frame sent\nI0113 07:46:12.241652 2283 log.go:181] (0x400003ac60) (0x4000a2a320) Stream removed, broadcasting: 1\nI0113 07:46:12.245152 2283 log.go:181] (0x400003ac60) Go away received\nI0113 07:46:12.248436 2283 log.go:181] (0x400003ac60) (0x4000a2a320) Stream removed, broadcasting: 1\nI0113 07:46:12.249165 2283 log.go:181] (0x400003ac60) (0x40003920a0) Stream removed, broadcasting: 3\nI0113 07:46:12.249480 2283 log.go:181] (0x400003ac60) (0x40003926e0) Stream removed, broadcasting: 5\n" Jan 13 07:46:12.259: INFO: stdout: "affinity-nodeport-timeout-c4bwq" Jan 13 07:46:12.259: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4909, will wait for the garbage collector to delete the pods Jan 13 07:46:12.391: INFO: Deleting 
ReplicationController affinity-nodeport-timeout took: 7.953939ms Jan 13 07:46:13.092: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 700.498291ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:29.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4909" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:67.961 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":309,"completed":207,"skipped":3593,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:29.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:46:30.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add" in namespace "downward-api-9062" to be "Succeeded or Failed" Jan 13 07:46:30.135: INFO: Pod "downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add": Phase="Pending", Reason="", readiness=false. Elapsed: 8.982738ms Jan 13 07:46:32.143: INFO: Pod "downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01764933s Jan 13 07:46:34.150: INFO: Pod "downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023970346s STEP: Saw pod success Jan 13 07:46:34.150: INFO: Pod "downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add" satisfied condition "Succeeded or Failed" Jan 13 07:46:34.155: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add container client-container: STEP: delete the pod Jan 13 07:46:34.191: INFO: Waiting for pod downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add to disappear Jan 13 07:46:34.227: INFO: Pod downwardapi-volume-cbac9376-b8b8-4ff4-bb4a-99b4e0c50add no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:34.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9062" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":208,"skipped":3602,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:34.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 13 07:46:34.378: INFO: Waiting up to 5m0s for pod "pod-f0284333-d376-4267-ba77-5c63921cd3d6" in namespace "emptydir-1582" to be "Succeeded or Failed" Jan 13 07:46:34.398: INFO: Pod "pod-f0284333-d376-4267-ba77-5c63921cd3d6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.671108ms Jan 13 07:46:36.405: INFO: Pod "pod-f0284333-d376-4267-ba77-5c63921cd3d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026944134s Jan 13 07:46:38.412: INFO: Pod "pod-f0284333-d376-4267-ba77-5c63921cd3d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033417926s STEP: Saw pod success Jan 13 07:46:38.412: INFO: Pod "pod-f0284333-d376-4267-ba77-5c63921cd3d6" satisfied condition "Succeeded or Failed" Jan 13 07:46:38.417: INFO: Trying to get logs from node leguer-worker pod pod-f0284333-d376-4267-ba77-5c63921cd3d6 container test-container: STEP: delete the pod Jan 13 07:46:38.440: INFO: Waiting for pod pod-f0284333-d376-4267-ba77-5c63921cd3d6 to disappear Jan 13 07:46:38.450: INFO: Pod pod-f0284333-d376-4267-ba77-5c63921cd3d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:38.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1582" for this suite. 
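For reference, the EmptyDir (root,0644,tmpfs) case above boils down to a pod that mounts a memory-backed emptyDir and writes a 0644-mode file into it. The sketch below is an approximation under assumed names; the pod name, image, mount path, and command are illustrative and not taken from the test output (the suite uses its own test image and creates the 0644 file from inside the test container).

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumed image; the e2e suite uses its own test image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir, as exercised by the test above
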
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":209,"skipped":3604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:38.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-ef596a84-27a8-4629-97b9-90548fbe188a STEP: Creating a pod to test consume secrets Jan 13 07:46:38.813: INFO: Waiting up to 5m0s for pod "pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1" in namespace "secrets-4043" to be "Succeeded or Failed" Jan 13 07:46:38.835: INFO: Pod "pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.386868ms Jan 13 07:46:40.844: INFO: Pod "pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030566556s Jan 13 07:46:42.853: INFO: Pod "pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040295697s STEP: Saw pod success Jan 13 07:46:42.854: INFO: Pod "pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1" satisfied condition "Succeeded or Failed" Jan 13 07:46:42.858: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1 container secret-env-test: STEP: delete the pod Jan 13 07:46:42.891: INFO: Waiting for pod pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1 to disappear Jan 13 07:46:42.900: INFO: Pod pod-secrets-834beeab-1d36-4a38-8e30-089193d557e1 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4043" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":309,"completed":210,"skipped":3629,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:42.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 13 07:46:47.059: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:47.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4848" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":309,"completed":211,"skipped":3641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:47.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-104d6a65-0a90-427c-9830-e9936c9b565a STEP: Creating a pod to test consume secrets Jan 13 07:46:47.276: INFO: Waiting up to 5m0s for pod "pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf" in namespace "secrets-1945" to be "Succeeded or Failed" Jan 13 07:46:47.286: INFO: Pod "pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.953432ms Jan 13 07:46:49.297: INFO: Pod "pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020275234s Jan 13 07:46:51.305: INFO: Pod "pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028505089s STEP: Saw pod success Jan 13 07:46:51.305: INFO: Pod "pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf" satisfied condition "Succeeded or Failed" Jan 13 07:46:51.311: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf container secret-volume-test: STEP: delete the pod Jan 13 07:46:51.349: INFO: Waiting for pod pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf to disappear Jan 13 07:46:51.388: INFO: Pod pod-secrets-dd070c3e-e765-4af3-975c-4ff7703db9bf no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:46:51.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1945" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":212,"skipped":3695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:46:51.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:46:54.601: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:46:56.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 13 07:46:59.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:47:00.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746120814, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:47:03.699: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:47:03.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:47:04.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6454" for this suite. STEP: Destroying namespace "webhook-6454-markers" for this suite. 
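The AdmissionWebhook case above registers a validating webhook against a custom resource and verifies that disallowed create, update, and delete operations are rejected. A hedged sketch of the kind of ValidatingWebhookConfiguration involved is shown below; the service name e2e-test-webhook comes from the log, while the configuration name, namespace, CRD group, versions, resource plural, and CA bundle placeholder are assumptions, not the suite's actual registration.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-ops            # hypothetical name
webhooks:
- name: deny-custom-resource-ops.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail                        # reject the request if the webhook is unreachable
  clientConfig:
    service:
      namespace: webhook-demo                # assumed namespace
      name: e2e-test-webhook                 # service name seen in the log above
      path: /custom-resource                 # assumed path
    caBundle: "<base64-encoded CA bundle>"   # placeholder; must be the serving CA of the webhook
  rules:
  - apiGroups: ["stable.example.com"]        # assumed CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["demo-crds"]                 # assumed plural resource name
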
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.632 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":309,"completed":213,"skipped":3745,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:47:05.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:47:05.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0" in namespace "downward-api-580" to be "Succeeded or Failed" Jan 13 07:47:05.162: INFO: Pod "downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0": Phase="Pending", Reason="", readiness=false. Elapsed: 26.761007ms Jan 13 07:47:07.175: INFO: Pod "downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039191567s Jan 13 07:47:09.183: INFO: Pod "downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047416536s STEP: Saw pod success Jan 13 07:47:09.183: INFO: Pod "downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0" satisfied condition "Succeeded or Failed" Jan 13 07:47:09.188: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0 container client-container: STEP: delete the pod Jan 13 07:47:09.254: INFO: Waiting for pod downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0 to disappear Jan 13 07:47:09.260: INFO: Pod downwardapi-volume-62509609-9146-4e71-9bad-5174b56221d0 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:47:09.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-580" for this suite. 
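The Downward API volume cpu-limit case above projects the container's CPU limit into a file via a resourceFieldRef, which the container then reads back. A minimal sketch under assumed names follows; the container name client-container appears in the log, while the pod name, image, limit value, mount path, and divisor are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container         # container name taken from the log above
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                # assumed limit; the projected file reports this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the limit in millicores
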
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":309,"completed":214,"skipped":3746,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:47:09.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8093 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8093 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8093 Jan 13 07:47:09.426: INFO: Found 0 stateful pods, waiting for 1 Jan 13 07:47:19.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 13 07:47:19.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 07:47:21.113: INFO: stderr: "I0113 07:47:20.946561 2303 log.go:181] (0x4000baa000) (0x4000a645a0) Create stream\nI0113 07:47:20.949069 2303 log.go:181] (0x4000baa000) (0x4000a645a0) Stream added, broadcasting: 1\nI0113 07:47:20.970688 2303 log.go:181] (0x4000baa000) Reply frame received for 1\nI0113 07:47:20.971764 2303 log.go:181] (0x4000baa000) (0x4000ad6000) Create stream\nI0113 07:47:20.971860 2303 log.go:181] (0x4000baa000) (0x4000ad6000) Stream added, broadcasting: 3\nI0113 07:47:20.973638 2303 log.go:181] (0x4000baa000) Reply frame received for 3\nI0113 07:47:20.974008 2303 log.go:181] (0x4000baa000) (0x4000a64d20) Create stream\nI0113 07:47:20.974088 2303 log.go:181] (0x4000baa000) (0x4000a64d20) Stream added, broadcasting: 5\nI0113 07:47:20.975556 2303 log.go:181] (0x4000baa000) Reply frame received for 5\nI0113 07:47:21.063248 2303 log.go:181] (0x4000baa000) Data frame received for 5\nI0113 07:47:21.063435 2303 log.go:181] (0x4000a64d20) (5) Data frame handling\nI0113 07:47:21.063809 2303 log.go:181] (0x4000a64d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 07:47:21.093125 2303 log.go:181] (0x4000baa000) Data frame received for 3\nI0113 07:47:21.093302 2303 log.go:181] (0x4000ad6000) (3) Data frame handling\nI0113 07:47:21.093582 2303 log.go:181] 
(0x4000baa000) Data frame received for 5\nI0113 07:47:21.093846 2303 log.go:181] (0x4000a64d20) (5) Data frame handling\nI0113 07:47:21.093970 2303 log.go:181] (0x4000ad6000) (3) Data frame sent\nI0113 07:47:21.094131 2303 log.go:181] (0x4000baa000) Data frame received for 3\nI0113 07:47:21.094238 2303 log.go:181] (0x4000ad6000) (3) Data frame handling\nI0113 07:47:21.095424 2303 log.go:181] (0x4000baa000) Data frame received for 1\nI0113 07:47:21.095567 2303 log.go:181] (0x4000a645a0) (1) Data frame handling\nI0113 07:47:21.095689 2303 log.go:181] (0x4000a645a0) (1) Data frame sent\nI0113 07:47:21.097216 2303 log.go:181] (0x4000baa000) (0x4000a645a0) Stream removed, broadcasting: 1\nI0113 07:47:21.100771 2303 log.go:181] (0x4000baa000) Go away received\nI0113 07:47:21.104359 2303 log.go:181] (0x4000baa000) (0x4000a645a0) Stream removed, broadcasting: 1\nI0113 07:47:21.104650 2303 log.go:181] (0x4000baa000) (0x4000ad6000) Stream removed, broadcasting: 3\nI0113 07:47:21.104916 2303 log.go:181] (0x4000baa000) (0x4000a64d20) Stream removed, broadcasting: 5\n" Jan 13 07:47:21.115: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 07:47:21.116: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 07:47:21.123: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 07:47:31.132: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 07:47:31.133: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 07:47:31.160: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99994122s Jan 13 07:47:32.168: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99188445s Jan 13 07:47:33.187: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983371301s Jan 13 07:47:34.195: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964559442s Jan 13 07:47:35.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.956913942s Jan 13 07:47:36.213: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.947987586s Jan 13 07:47:37.220: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.939021233s Jan 13 07:47:38.228: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.931528936s Jan 13 07:47:39.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.923624345s Jan 13 07:47:40.245: INFO: Verifying statefulset ss doesn't scale past 1 for another 914.004882ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8093 Jan 13 07:47:41.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:47:42.862: INFO: stderr: "I0113 07:47:42.761314 2323 log.go:181] (0x40001a8370) (0x40007e00a0) Create stream\nI0113 07:47:42.765649 2323 log.go:181] (0x40001a8370) (0x40007e00a0) Stream added, broadcasting: 1\nI0113 07:47:42.774013 2323 log.go:181] (0x40001a8370) Reply frame received for 1\nI0113 07:47:42.774553 2323 log.go:181] (0x40001a8370) (0x400038b540) Create stream\nI0113 07:47:42.774609 2323 log.go:181] (0x40001a8370) (0x400038b540) Stream added, broadcasting: 3\nI0113 07:47:42.776311 2323 log.go:181] (0x40001a8370) Reply frame received for 3\nI0113 
07:47:42.776799 2323 log.go:181] (0x40001a8370) (0x4000135f40) Create stream\nI0113 07:47:42.776996 2323 log.go:181] (0x40001a8370) (0x4000135f40) Stream added, broadcasting: 5\nI0113 07:47:42.778820 2323 log.go:181] (0x40001a8370) Reply frame received for 5\nI0113 07:47:42.844503 2323 log.go:181] (0x40001a8370) Data frame received for 3\nI0113 07:47:42.845057 2323 log.go:181] (0x40001a8370) Data frame received for 5\nI0113 07:47:42.845460 2323 log.go:181] (0x4000135f40) (5) Data frame handling\nI0113 07:47:42.846583 2323 log.go:181] (0x4000135f40) (5) Data frame sent\nI0113 07:47:42.847054 2323 log.go:181] (0x400038b540) (3) Data frame handling\nI0113 07:47:42.847196 2323 log.go:181] (0x400038b540) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 07:47:42.847607 2323 log.go:181] (0x40001a8370) Data frame received for 1\nI0113 07:47:42.847696 2323 log.go:181] (0x40007e00a0) (1) Data frame handling\nI0113 07:47:42.847783 2323 log.go:181] (0x40007e00a0) (1) Data frame sent\nI0113 07:47:42.847955 2323 log.go:181] (0x40001a8370) Data frame received for 3\nI0113 07:47:42.848079 2323 log.go:181] (0x400038b540) (3) Data frame handling\nI0113 07:47:42.848259 2323 log.go:181] (0x40001a8370) Data frame received for 5\nI0113 07:47:42.848470 2323 log.go:181] (0x4000135f40) (5) Data frame handling\nI0113 07:47:42.850266 2323 log.go:181] (0x40001a8370) (0x40007e00a0) Stream removed, broadcasting: 1\nI0113 07:47:42.853214 2323 log.go:181] (0x40001a8370) Go away received\nI0113 07:47:42.855199 2323 log.go:181] (0x40001a8370) (0x40007e00a0) Stream removed, broadcasting: 1\nI0113 07:47:42.855710 2323 log.go:181] (0x40001a8370) (0x400038b540) Stream removed, broadcasting: 3\nI0113 07:47:42.855862 2323 log.go:181] (0x40001a8370) (0x4000135f40) Stream removed, broadcasting: 5\n" Jan 13 07:47:42.863: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 07:47:42.863: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 07:47:42.870: INFO: Found 1 stateful pods, waiting for 3 Jan 13 07:47:52.882: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:47:52.882: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 07:47:52.882: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 13 07:47:52.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 07:47:54.448: INFO: stderr: "I0113 07:47:54.310025 2343 log.go:181] (0x40001304d0) (0x4000608c80) Create stream\nI0113 07:47:54.315323 2343 log.go:181] (0x40001304d0) (0x4000608c80) Stream added, broadcasting: 1\nI0113 07:47:54.329745 2343 log.go:181] (0x40001304d0) Reply frame received for 1\nI0113 07:47:54.330595 2343 log.go:181] (0x40001304d0) (0x400020f900) Create stream\nI0113 07:47:54.330716 2343 log.go:181] (0x40001304d0) (0x400020f900) Stream added, broadcasting: 3\nI0113 07:47:54.332927 2343 log.go:181] (0x40001304d0) Reply frame received for 3\nI0113 07:47:54.333481 2343 log.go:181] (0x40001304d0) (0x40001b3cc0) Create stream\nI0113 07:47:54.333602 2343 log.go:181] (0x40001304d0) (0x40001b3cc0) 
Stream added, broadcasting: 5\nI0113 07:47:54.335037 2343 log.go:181] (0x40001304d0) Reply frame received for 5\nI0113 07:47:54.423104 2343 log.go:181] (0x40001304d0) Data frame received for 3\nI0113 07:47:54.423471 2343 log.go:181] (0x400020f900) (3) Data frame handling\nI0113 07:47:54.424337 2343 log.go:181] (0x400020f900) (3) Data frame sent\nI0113 07:47:54.426330 2343 log.go:181] (0x40001304d0) Data frame received for 5\nI0113 07:47:54.426501 2343 log.go:181] (0x40001304d0) Data frame received for 3\nI0113 07:47:54.427389 2343 log.go:181] (0x400020f900) (3) Data frame handling\nI0113 07:47:54.428126 2343 log.go:181] (0x40001b3cc0) (5) Data frame handling\nI0113 07:47:54.428346 2343 log.go:181] (0x40001b3cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 07:47:54.429874 2343 log.go:181] (0x40001304d0) Data frame received for 1\nI0113 07:47:54.430057 2343 log.go:181] (0x4000608c80) (1) Data frame handling\nI0113 07:47:54.430215 2343 log.go:181] (0x40001304d0) Data frame received for 5\nI0113 07:47:54.430351 2343 log.go:181] (0x40001b3cc0) (5) Data frame handling\nI0113 07:47:54.430475 2343 log.go:181] (0x4000608c80) (1) Data frame sent\nI0113 07:47:54.431282 2343 log.go:181] (0x40001304d0) (0x4000608c80) Stream removed, broadcasting: 1\nI0113 07:47:54.435056 2343 log.go:181] (0x40001304d0) Go away received\nI0113 07:47:54.438799 2343 log.go:181] (0x40001304d0) (0x4000608c80) Stream removed, broadcasting: 1\nI0113 07:47:54.439303 2343 log.go:181] (0x40001304d0) (0x400020f900) Stream removed, broadcasting: 3\nI0113 07:47:54.439627 2343 log.go:181] (0x40001304d0) (0x40001b3cc0) Stream removed, broadcasting: 5\n" Jan 13 07:47:54.449: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 07:47:54.449: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 07:47:54.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 07:47:56.116: INFO: stderr: "I0113 07:47:55.936221 2363 log.go:181] (0x4000634000) (0x4000a1e140) Create stream\nI0113 07:47:55.941004 2363 log.go:181] (0x4000634000) (0x4000a1e140) Stream added, broadcasting: 1\nI0113 07:47:55.954589 2363 log.go:181] (0x4000634000) Reply frame received for 1\nI0113 07:47:55.955126 2363 log.go:181] (0x4000634000) (0x400054a000) Create stream\nI0113 07:47:55.955203 2363 log.go:181] (0x4000634000) (0x400054a000) Stream added, broadcasting: 3\nI0113 07:47:55.956559 2363 log.go:181] (0x4000634000) Reply frame received for 3\nI0113 07:47:55.956774 2363 log.go:181] (0x4000634000) (0x4000a1e1e0) Create stream\nI0113 07:47:55.956827 2363 log.go:181] (0x4000634000) (0x4000a1e1e0) Stream added, broadcasting: 5\nI0113 07:47:55.958399 2363 log.go:181] (0x4000634000) Reply frame received for 5\nI0113 07:47:56.054298 2363 log.go:181] (0x4000634000) Data frame received for 5\nI0113 07:47:56.054656 2363 log.go:181] (0x4000a1e1e0) (5) Data frame handling\nI0113 07:47:56.055611 2363 log.go:181] (0x4000a1e1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 07:47:56.100433 2363 log.go:181] (0x4000634000) Data frame received for 5\nI0113 07:47:56.100610 2363 log.go:181] (0x4000a1e1e0) (5) Data frame handling\nI0113 07:47:56.100797 2363 log.go:181] (0x4000634000) Data frame received for 3\nI0113 
07:47:56.101049 2363 log.go:181] (0x400054a000) (3) Data frame handling\nI0113 07:47:56.101242 2363 log.go:181] (0x400054a000) (3) Data frame sent\nI0113 07:47:56.101430 2363 log.go:181] (0x4000634000) Data frame received for 3\nI0113 07:47:56.101578 2363 log.go:181] (0x400054a000) (3) Data frame handling\nI0113 07:47:56.102137 2363 log.go:181] (0x4000634000) Data frame received for 1\nI0113 07:47:56.102237 2363 log.go:181] (0x4000a1e140) (1) Data frame handling\nI0113 07:47:56.102329 2363 log.go:181] (0x4000a1e140) (1) Data frame sent\nI0113 07:47:56.104819 2363 log.go:181] (0x4000634000) (0x4000a1e140) Stream removed, broadcasting: 1\nI0113 07:47:56.105645 2363 log.go:181] (0x4000634000) Go away received\nI0113 07:47:56.109376 2363 log.go:181] (0x4000634000) (0x4000a1e140) Stream removed, broadcasting: 1\nI0113 07:47:56.109714 2363 log.go:181] (0x4000634000) (0x400054a000) Stream removed, broadcasting: 3\nI0113 07:47:56.109941 2363 log.go:181] (0x4000634000) (0x4000a1e1e0) Stream removed, broadcasting: 5\n" Jan 13 07:47:56.117: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 07:47:56.118: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 07:47:56.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 07:47:57.759: INFO: stderr: "I0113 07:47:57.611532 2383 log.go:181] (0x400023e160) (0x4000513540) Create stream\nI0113 07:47:57.617685 2383 log.go:181] (0x400023e160) (0x4000513540) Stream added, broadcasting: 1\nI0113 07:47:57.631178 2383 log.go:181] (0x400023e160) Reply frame received for 1\nI0113 07:47:57.632027 2383 log.go:181] (0x400023e160) (0x40009af5e0) Create stream\nI0113 07:47:57.632114 2383 log.go:181] (0x400023e160) (0x40009af5e0) Stream added, broadcasting: 3\nI0113 07:47:57.633610 2383 log.go:181] (0x400023e160) Reply frame received for 3\nI0113 07:47:57.633857 2383 log.go:181] (0x400023e160) (0x4000722000) Create stream\nI0113 07:47:57.633939 2383 log.go:181] (0x400023e160) (0x4000722000) Stream added, broadcasting: 5\nI0113 07:47:57.635198 2383 log.go:181] (0x400023e160) Reply frame received for 5\nI0113 07:47:57.696039 2383 log.go:181] (0x400023e160) Data frame received for 5\nI0113 07:47:57.696484 2383 log.go:181] (0x4000722000) (5) Data frame handling\nI0113 07:47:57.697638 2383 log.go:181] (0x4000722000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 07:47:57.736638 2383 log.go:181] (0x400023e160) Data frame received for 5\nI0113 07:47:57.736968 2383 log.go:181] (0x4000722000) (5) Data frame handling\nI0113 07:47:57.738540 2383 log.go:181] (0x400023e160) Data frame received for 3\nI0113 07:47:57.738669 2383 log.go:181] (0x40009af5e0) (3) Data frame handling\nI0113 07:47:57.738833 2383 log.go:181] (0x40009af5e0) (3) Data frame sent\nI0113 07:47:57.741306 2383 log.go:181] (0x400023e160) Data frame received for 3\nI0113 07:47:57.741454 2383 log.go:181] (0x40009af5e0) (3) Data frame handling\nI0113 07:47:57.741672 2383 log.go:181] (0x400023e160) Data frame received for 1\nI0113 07:47:57.741796 2383 log.go:181] (0x4000513540) (1) Data frame handling\nI0113 07:47:57.741901 2383 log.go:181] (0x4000513540) (1) Data frame sent\nI0113 07:47:57.742692 2383 log.go:181] (0x400023e160) (0x4000513540) Stream removed, broadcasting: 1\nI0113 
07:47:57.745051 2383 log.go:181] (0x400023e160) Go away received\nI0113 07:47:57.749011 2383 log.go:181] (0x400023e160) (0x4000513540) Stream removed, broadcasting: 1\nI0113 07:47:57.749399 2383 log.go:181] (0x400023e160) (0x40009af5e0) Stream removed, broadcasting: 3\nI0113 07:47:57.749630 2383 log.go:181] (0x400023e160) (0x4000722000) Stream removed, broadcasting: 5\n" Jan 13 07:47:57.760: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 07:47:57.760: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 07:47:57.760: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 07:47:57.786: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 13 07:48:07.802: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 07:48:07.803: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 07:48:07.803: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 07:48:07.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999994062s Jan 13 07:48:08.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979648992s Jan 13 07:48:09.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969279608s Jan 13 07:48:10.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.958280994s Jan 13 07:48:11.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.923428988s Jan 13 07:48:12.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.912399252s Jan 13 07:48:13.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.90211946s Jan 13 07:48:14.934: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.891057433s Jan 13 07:48:15.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.879973503s Jan 13 07:48:16.954: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.550111ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8093 Jan 13 07:48:17.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:19.480: INFO: stderr: "I0113 07:48:19.357536 2403 log.go:181] (0x400066c000) (0x4000c8e000) Create stream\nI0113 07:48:19.361096 2403 log.go:181] (0x400066c000) (0x4000c8e000) Stream added, broadcasting: 1\nI0113 07:48:19.374408 2403 log.go:181] (0x400066c000) Reply frame received for 1\nI0113 07:48:19.374918 2403 log.go:181] (0x400066c000) (0x4000ba6000) Create stream\nI0113 07:48:19.374976 2403 log.go:181] (0x400066c000) (0x4000ba6000) Stream added, broadcasting: 3\nI0113 07:48:19.376327 2403 log.go:181] (0x400066c000) Reply frame received for 3\nI0113 07:48:19.376533 2403 log.go:181] (0x400066c000) (0x4000ba60a0) Create stream\nI0113 07:48:19.376585 2403 log.go:181] (0x400066c000) (0x4000ba60a0) Stream added, broadcasting: 5\nI0113 07:48:19.378013 2403 log.go:181] (0x400066c000) Reply frame received for 5\nI0113 07:48:19.456315 2403 log.go:181] (0x400066c000) Data frame received for 5\nI0113 07:48:19.456797 2403 log.go:181] (0x400066c000) Data frame received for 3\nI0113 07:48:19.457042 2403 log.go:181] (0x4000ba60a0) (5) Data frame 
handling\nI0113 07:48:19.457392 2403 log.go:181] (0x400066c000) Data frame received for 1\nI0113 07:48:19.457680 2403 log.go:181] (0x4000c8e000) (1) Data frame handling\nI0113 07:48:19.457914 2403 log.go:181] (0x4000ba6000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 07:48:19.459830 2403 log.go:181] (0x4000ba6000) (3) Data frame sent\nI0113 07:48:19.460262 2403 log.go:181] (0x400066c000) Data frame received for 3\nI0113 07:48:19.460451 2403 log.go:181] (0x4000ba6000) (3) Data frame handling\nI0113 07:48:19.460660 2403 log.go:181] (0x4000ba60a0) (5) Data frame sent\nI0113 07:48:19.460795 2403 log.go:181] (0x400066c000) Data frame received for 5\nI0113 07:48:19.460969 2403 log.go:181] (0x4000c8e000) (1) Data frame sent\nI0113 07:48:19.461233 2403 log.go:181] (0x4000ba60a0) (5) Data frame handling\nI0113 07:48:19.464742 2403 log.go:181] (0x400066c000) (0x4000c8e000) Stream removed, broadcasting: 1\nI0113 07:48:19.465986 2403 log.go:181] (0x400066c000) Go away received\nI0113 07:48:19.471435 2403 log.go:181] (0x400066c000) (0x4000c8e000) Stream removed, broadcasting: 1\nI0113 07:48:19.471859 2403 log.go:181] (0x400066c000) (0x4000ba6000) Stream removed, broadcasting: 3\nI0113 07:48:19.472148 2403 log.go:181] (0x400066c000) (0x4000ba60a0) Stream removed, broadcasting: 5\n" Jan 13 07:48:19.481: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 07:48:19.482: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 07:48:19.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:20.979: INFO: stderr: "I0113 07:48:20.861726 2423 log.go:181] (0x4000131080) (0x4000a641e0) Create stream\nI0113 07:48:20.863936 2423 log.go:181] (0x4000131080) (0x4000a641e0) Stream added, broadcasting: 1\nI0113 07:48:20.877297 2423 log.go:181] (0x4000131080) Reply frame received for 1\nI0113 07:48:20.877920 2423 log.go:181] (0x4000131080) (0x40003ea000) Create stream\nI0113 07:48:20.877982 2423 log.go:181] (0x4000131080) (0x40003ea000) Stream added, broadcasting: 3\nI0113 07:48:20.879350 2423 log.go:181] (0x4000131080) Reply frame received for 3\nI0113 07:48:20.879753 2423 log.go:181] (0x4000131080) (0x4000a64280) Create stream\nI0113 07:48:20.879849 2423 log.go:181] (0x4000131080) (0x4000a64280) Stream added, broadcasting: 5\nI0113 07:48:20.881322 2423 log.go:181] (0x4000131080) Reply frame received for 5\nI0113 07:48:20.961785 2423 log.go:181] (0x4000131080) Data frame received for 5\nI0113 07:48:20.962448 2423 log.go:181] (0x4000131080) Data frame received for 1\nI0113 07:48:20.962549 2423 log.go:181] (0x4000a641e0) (1) Data frame handling\nI0113 07:48:20.962810 2423 log.go:181] (0x4000131080) Data frame received for 3\nI0113 07:48:20.963084 2423 log.go:181] (0x40003ea000) (3) Data frame handling\nI0113 07:48:20.963919 2423 log.go:181] (0x4000a641e0) (1) Data frame sent\nI0113 07:48:20.965265 2423 log.go:181] (0x40003ea000) (3) Data frame sent\nI0113 07:48:20.965331 2423 log.go:181] (0x4000131080) Data frame received for 3\nI0113 07:48:20.965451 2423 log.go:181] (0x4000a64280) (5) Data frame handling\nI0113 07:48:20.965545 2423 log.go:181] (0x4000a64280) (5) Data frame sent\nI0113 07:48:20.965607 2423 log.go:181] (0x4000131080) Data frame received for 5\n+ mv -v 
/tmp/index.html /usr/local/apache2/htdocs/\nI0113 07:48:20.965946 2423 log.go:181] (0x4000131080) (0x4000a641e0) Stream removed, broadcasting: 1\nI0113 07:48:20.966805 2423 log.go:181] (0x4000a64280) (5) Data frame handling\nI0113 07:48:20.966959 2423 log.go:181] (0x40003ea000) (3) Data frame handling\nI0113 07:48:20.968034 2423 log.go:181] (0x4000131080) Go away received\nI0113 07:48:20.972449 2423 log.go:181] (0x4000131080) (0x4000a641e0) Stream removed, broadcasting: 1\nI0113 07:48:20.972669 2423 log.go:181] (0x4000131080) (0x40003ea000) Stream removed, broadcasting: 3\nI0113 07:48:20.972900 2423 log.go:181] (0x4000131080) (0x4000a64280) Stream removed, broadcasting: 5\n" Jan 13 07:48:20.980: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 07:48:20.980: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 07:48:20.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:22.425: INFO: rc: 1 Jan 13 07:48:22.426: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 07:48:32.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:37.096: INFO: rc: 1 Jan 13 07:48:37.097: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:48:47.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:48.431: INFO: rc: 1 Jan 13 07:48:48.431: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:48:58.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:48:59.739: INFO: rc: 1 Jan 13 07:48:59.739: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods 
"ss-2" not found error: exit status 1 Jan 13 07:49:09.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:49:11.095: INFO: rc: 1 Jan 13 07:49:11.095: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:49:21.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:49:22.446: INFO: rc: 1 Jan 13 07:49:22.447: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:49:32.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:49:33.756: INFO: rc: 1 Jan 13 07:49:33.757: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:49:43.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:49:45.068: INFO: rc: 1 Jan 13 07:49:45.069: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:49:55.070: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:49:56.397: INFO: rc: 1 Jan 13 07:49:56.398: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:50:06.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Jan 13 07:50:07.920: INFO: rc: 1 Jan 13 07:50:07.921: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:50:17.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:50:19.247: INFO: rc: 1 Jan 13 07:50:19.247: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:50:29.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:50:30.674: INFO: rc: 1 Jan 13 07:50:30.675: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:50:40.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:50:42.015: INFO: rc: 1 Jan 13 07:50:42.015: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:50:52.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:50:53.258: INFO: rc: 1 Jan 13 07:50:53.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:51:03.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:51:04.560: INFO: rc: 1 Jan 13 07:51:04.560: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c 
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:51:14.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:51:16.030: INFO: rc: 1 Jan 13 07:51:16.030: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:51:26.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:51:27.356: INFO: rc: 1 Jan 13 07:51:27.357: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:51:37.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:51:38.688: INFO: rc: 1 Jan 13 07:51:38.688: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:51:48.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:51:50.041: INFO: rc: 1 Jan 13 07:51:50.042: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:00.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:01.328: INFO: rc: 1 Jan 13 07:52:01.328: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:11.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:12.761: INFO: rc: 1 Jan 13 07:52:12.762: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:22.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:24.122: INFO: rc: 1 Jan 13 07:52:24.123: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:34.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:35.653: INFO: rc: 1 Jan 13 07:52:35.654: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:45.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:47.003: INFO: rc: 1 Jan 13 07:52:47.004: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:52:57.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:52:58.312: INFO: rc: 1 Jan 13 07:52:58.312: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:53:08.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:53:09.668: INFO: rc: 1 Jan 13 07:53:09.668: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:53:19.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:53:20.960: INFO: rc: 1 Jan 13 07:53:20.960: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 07:53:30.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-8093 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 07:53:32.440: INFO: rc: 1 Jan 13 07:53:32.440: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Jan 13 07:53:32.440: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 07:53:32.453: INFO: Deleting all statefulset in ns statefulset-8093 Jan 13 07:53:32.457: INFO: Scaling statefulset ss to 0 Jan 13 07:53:32.472: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 07:53:32.475: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:53:32.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8093" for this suite. 
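The halting checks above work by breaking and then restoring readiness on a stateful pod: the index.html served by the httpd-based webserver container is moved out of its document root, the pod reports Ready=false, and ordered scaling is verified to stall until the file is moved back. A sketch of the same manipulation by hand, using the commands already shown in the log against ss-0 in namespace statefulset-8093 (server and kubeconfig flags omitted):

# Break readiness on ss-0 by hiding the page its readiness probe fetches...
kubectl --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c \
  'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# ...wait for Ready=false and observe that scaling halts, then restore it.
kubectl --namespace=statefulset-8093 exec ss-0 -- /bin/sh -x -c \
  'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'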
• [SLOW TEST:383.237 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":309,"completed":215,"skipped":3767,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:53:32.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: set up a multi version CRD Jan 13 07:53:32.658: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:55:48.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3755" for this suite. 
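The renamed-version check above depends on the API server republishing the CRD's OpenAPI document whenever the set of served versions changes. A rough way to observe the same behaviour by hand, assuming a placeholder group, kind, and version rather than the suite's randomized CRD:

# Hypothetical check: create a CRD, then look for its served version in the
# aggregated OpenAPI document. Group, kind, and version names are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
EOF

# Paths for the served version should appear; after renaming the version in
# the CRD and re-applying, the old name should eventually disappear.
kubectl get --raw /openapi/v2 | grep -o '/apis/demo.example.com/v[a-z0-9]*' | sort -u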
• [SLOW TEST:135.616 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":309,"completed":216,"skipped":3787,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:55:48.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test substitution in container's args Jan 13 07:55:48.268: INFO: Waiting up to 5m0s for pod "var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d" in namespace "var-expansion-6801" to be "Succeeded or Failed" Jan 13 07:55:48.283: INFO: Pod "var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.603222ms Jan 13 07:55:50.366: INFO: Pod "var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098409116s Jan 13 07:55:52.449: INFO: Pod "var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181274205s STEP: Saw pod success Jan 13 07:55:52.449: INFO: Pod "var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d" satisfied condition "Succeeded or Failed" Jan 13 07:55:52.503: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d container dapi-container: STEP: delete the pod Jan 13 07:55:52.653: INFO: Waiting for pod var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d to disappear Jan 13 07:55:52.667: INFO: Pod var-expansion-4fc12d8f-35a5-4e97-97f7-b76b8c33955d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:55:52.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6801" for this suite. 
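The substitution above is the kubelet's $(VAR_NAME) expansion, which is applied to a container's command and args using variables from its env before the process starts. A minimal sketch of a pod using the same mechanism, with illustrative names and a busybox image:

# Hypothetical pod showing $(VAR) substitution in args; the kubelet expands
# TEST_VAR from the container environment before invoking the shell.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo substituted args value: $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
EOF

# Once the pod has completed, its log should contain: substituted args value: test-value
kubectl logs var-expansion-example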
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":309,"completed":217,"skipped":3789,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:55:52.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 13 07:55:52.772: INFO: Waiting up to 5m0s for pod "pod-192f9f47-076b-4e50-9838-e9e2636c9f79" in namespace "emptydir-3261" to be "Succeeded or Failed" Jan 13 07:55:52.786: INFO: Pod "pod-192f9f47-076b-4e50-9838-e9e2636c9f79": Phase="Pending", Reason="", readiness=false. Elapsed: 13.898508ms Jan 13 07:55:54.794: INFO: Pod "pod-192f9f47-076b-4e50-9838-e9e2636c9f79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02149472s Jan 13 07:55:56.801: INFO: Pod "pod-192f9f47-076b-4e50-9838-e9e2636c9f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028022187s STEP: Saw pod success Jan 13 07:55:56.801: INFO: Pod "pod-192f9f47-076b-4e50-9838-e9e2636c9f79" satisfied condition "Succeeded or Failed" Jan 13 07:55:56.806: INFO: Trying to get logs from node leguer-worker2 pod pod-192f9f47-076b-4e50-9838-e9e2636c9f79 container test-container: STEP: delete the pod Jan 13 07:55:56.847: INFO: Waiting for pod pod-192f9f47-076b-4e50-9838-e9e2636c9f79 to disappear Jan 13 07:55:56.859: INFO: Pod pod-192f9f47-076b-4e50-9838-e9e2636c9f79 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:55:56.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3261" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":218,"skipped":3803,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:55:56.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:55:57.003: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 13 07:56:02.396: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 13 07:56:02.396: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 13 07:56:04.411: INFO: Creating deployment "test-rollover-deployment" Jan 13 07:56:04.507: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 13 07:56:06.518: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 13 07:56:06.545: INFO: Ensure that both replica sets have 1 created replica Jan 13 07:56:06.555: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 13 07:56:06.568: INFO: Updating deployment test-rollover-deployment Jan 13 07:56:06.568: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 13 07:56:08.635: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 13 07:56:08.646: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 13 07:56:08.658: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:08.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121366, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:10.675: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:10.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:12.675: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:12.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:14.673: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:14.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:16.673: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:16.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:18.672: INFO: all replica sets need to contain the pod-template-hash label Jan 13 07:56:18.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:20.954: INFO: Jan 13 07:56:20.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121370, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121364, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 07:56:22.677: INFO: Jan 13 07:56:22.677: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 07:56:22.694: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9503 abc00f09-6f14-4c10-ae01-6389060f98b3 508583 2 2021-01-13 07:56:04 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-13 07:56:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 07:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400668a5d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-01-13 07:56:04 +0000 UTC,LastTransitionTime:2021-01-13 07:56:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2021-01-13 07:56:21 +0000 UTC,LastTransitionTime:2021-01-13 07:56:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 13 07:56:22.703: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-9503 405461f9-1053-4ab7-8e07-651587f8af83 508569 2 2021-01-13 07:56:06 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-rollover-deployment abc00f09-6f14-4c10-ae01-6389060f98b3 0x400668aa97 0x400668aa98}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abc00f09-6f14-4c10-ae01-6389060f98b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x400668ab78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:56:22.703: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 13 07:56:22.704: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9503 f9c27f55-b28a-4a39-8cdd-278213802500 508581 2 2021-01-13 07:55:56 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment abc00f09-6f14-4c10-ae01-6389060f98b3 0x400668a977 0x400668a978}] [] [{e2e.test Update apps/v1 2021-01-13 07:55:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 07:56:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abc00f09-6f14-4c10-ae01-6389060f98b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x400668aa28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:56:22.705: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9503 827d1c45-7a64-4b31-9811-99d52dd1eb32 508535 2 2021-01-13 07:56:04 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment abc00f09-6f14-4c10-ae01-6389060f98b3 0x400668abe7 0x400668abe8}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"abc00f09-6f14-4c10-ae01-6389060f98b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] 
Always 0x400668acd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:56:22.714: INFO: Pod "test-rollover-deployment-668db69979-pvlrj" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-pvlrj test-rollover-deployment-668db69979- deployment-9503 f1aa8b36-6ec8-4949-a7fc-b174e57d4271 508547 0 2021-01-13 07:56:06 +0000 UTC map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 405461f9-1053-4ab7-8e07-651587f8af83 0x400668b257 0x400668b258}] [] [{kube-controller-manager Update v1 2021-01-13 07:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"405461f9-1053-4ab7-8e07-651587f8af83\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:56:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-85d6d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-85d6d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-85d6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscal
ation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:56:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:56:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:56:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:56:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.61,StartTime:2021-01-13 07:56:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:56:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://ebb5869416b495de62717b62f2446a0412e4b4ba7493d3ed9dc8825d8e9a218d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:56:22.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9503" for this suite. 
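For readers following the rollover assertions above ("all replica sets need to contain the pod-template-hash label" while the controller converges, then "Ensure that both old replica sets have no replicas"), here is a minimal client-go sketch, not the e2e framework's own helper, of an equivalent check: it polls until every ReplicaSet owned by the Deployment other than the current-revision one has drained to zero replicas. The function name waitForRollover, the poll intervals, and the choice of the deployment.kubernetes.io/revision annotation to identify the current ReplicaSet are assumptions for illustration; the namespace, Deployment name, and kubeconfig path are taken from the log above.

```go
// Illustrative sketch only; not the conformance test's own code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRollover succeeds once every ReplicaSet selected by the Deployment, except the
// one at the current revision, has zero replicas and the Deployment reports all of its
// replicas as updated and available.
func waitForRollover(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		sel := metav1.FormatLabelSelector(d.Spec.Selector)
		rsList, err := cs.AppsV1().ReplicaSets(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		current := d.Annotations["deployment.kubernetes.io/revision"]
		for _, rs := range rsList.Items {
			if rs.Annotations["deployment.kubernetes.io/revision"] == current {
				continue // the current-revision ReplicaSet is allowed to hold replicas
			}
			if rs.Status.Replicas != 0 {
				return false, nil // an old ReplicaSet still owns pods
			}
		}
		return d.Status.UpdatedReplicas == *d.Spec.Replicas &&
			d.Status.AvailableReplicas == *d.Spec.Replicas, nil
	})
}

func main() {
	// kubeconfig path from the run above; namespace and name from this test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRollover(cs, "deployment-9503", "test-rollover-deployment"); err != nil {
		panic(err)
	}
	fmt.Println("rollover complete: old ReplicaSets drained")
}
```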
• [SLOW TEST:25.853 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":309,"completed":219,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:56:22.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1274.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1274.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1274.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1274.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 07:56:29.183: INFO: DNS probes using dns-1274/dns-test-4522c64b-5c16-4793-98c5-2683f0c1ff71 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:56:29.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1274" for this suite. 
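The podARec line in the wheezy/jessie probe scripts above turns the pod's own IP into its cluster-DNS A record: dots become dashes, under <namespace>.pod.cluster.local. A small Go sketch of the same construction follows; the pod IP is a placeholder, the namespace is the one this test used, and the final lookup only succeeds from inside the cluster, which is exactly what the dig probes verify.

```go
// Illustrative sketch of the name built by
// podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1274.pod.cluster.local"}')
package main

import (
	"context"
	"fmt"
	"net"
	"strings"
	"time"
)

// podARecord returns the cluster-DNS A record name for a pod IP in a namespace.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	// The IP below is a placeholder; the namespace matches this test run.
	name := podARecord("10.244.2.61", "dns-1274")
	fmt.Println(name) // 10-244-2-61.dns-1274.pod.cluster.local

	// Resolution works only from inside the cluster, where /etc/resolv.conf points at
	// the cluster DNS service; outside the cluster this lookup is expected to fail.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := net.DefaultResolver.LookupHost(ctx, name)
	fmt.Println(addrs, err)
}
```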
• [SLOW TEST:6.579 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":309,"completed":220,"skipped":3878,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:56:29.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:56:32.083: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:56:34.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121392, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121392, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121392, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121392, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:56:37.194: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:56:37.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4162" for this suite. STEP: Destroying namespace "webhook-4162-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.264 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":309,"completed":221,"skipped":3889,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:56:37.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a replication controller Jan 13 07:56:37.765: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 create -f -' Jan 13 07:56:41.294: INFO: stderr: "" Jan 13 07:56:41.294: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 07:56:41.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:56:42.707: INFO: stderr: "" Jan 13 07:56:42.707: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " Jan 13 07:56:42.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-g5wc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:56:44.011: INFO: stderr: "" Jan 13 07:56:44.011: INFO: stdout: "" Jan 13 07:56:44.012: INFO: update-demo-nautilus-g5wc7 is created but not running Jan 13 07:56:49.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:56:50.530: INFO: stderr: "" Jan 13 07:56:50.530: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " Jan 13 07:56:50.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-g5wc7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:56:51.839: INFO: stderr: "" Jan 13 07:56:51.839: INFO: stdout: "true" Jan 13 07:56:51.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-g5wc7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:56:53.205: INFO: stderr: "" Jan 13 07:56:53.205: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:56:53.205: INFO: validating pod update-demo-nautilus-g5wc7 Jan 13 07:56:53.212: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:56:53.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:56:53.212: INFO: update-demo-nautilus-g5wc7 is verified up and running Jan 13 07:56:53.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:56:54.645: INFO: stderr: "" Jan 13 07:56:54.645: INFO: stdout: "true" Jan 13 07:56:54.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:56:55.968: INFO: stderr: "" Jan 13 07:56:55.968: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:56:55.968: INFO: validating pod update-demo-nautilus-mjwv4 Jan 13 07:56:55.975: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:56:55.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 13 07:56:55.975: INFO: update-demo-nautilus-mjwv4 is verified up and running STEP: scaling down the replication controller Jan 13 07:56:55.989: INFO: scanned /root for discovery docs: Jan 13 07:56:55.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 13 07:56:58.651: INFO: stderr: "" Jan 13 07:56:58.651: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 07:56:58.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:00.038: INFO: stderr: "" Jan 13 07:57:00.038: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 13 07:57:05.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:06.464: INFO: stderr: "" Jan 13 07:57:06.464: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 13 07:57:11.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:12.849: INFO: stderr: "" Jan 13 07:57:12.850: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 13 07:57:17.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:19.188: INFO: stderr: "" Jan 13 07:57:19.188: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 13 07:57:24.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:25.530: INFO: stderr: "" Jan 13 07:57:25.530: INFO: stdout: "update-demo-nautilus-g5wc7 update-demo-nautilus-mjwv4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 13 07:57:30.531: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:31.927: INFO: stderr: "" Jan 13 07:57:31.928: INFO: stdout: "update-demo-nautilus-mjwv4 " Jan 13 07:57:31.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:57:33.261: INFO: stderr: "" Jan 13 07:57:33.261: INFO: stdout: "true" Jan 13 07:57:33.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:57:34.667: INFO: stderr: "" Jan 13 07:57:34.667: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:57:34.667: INFO: validating pod update-demo-nautilus-mjwv4 Jan 13 07:57:34.673: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:57:34.673: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:57:34.673: INFO: update-demo-nautilus-mjwv4 is verified up and running STEP: scaling up the replication controller Jan 13 07:57:34.686: INFO: scanned /root for discovery docs: Jan 13 07:57:34.687: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 13 07:57:37.283: INFO: stderr: "" Jan 13 07:57:37.283: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 13 07:57:37.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 07:57:38.744: INFO: stderr: "" Jan 13 07:57:38.744: INFO: stdout: "update-demo-nautilus-mjwv4 update-demo-nautilus-p7f48 " Jan 13 07:57:38.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:57:40.070: INFO: stderr: "" Jan 13 07:57:40.070: INFO: stdout: "true" Jan 13 07:57:40.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-mjwv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:57:41.514: INFO: stderr: "" Jan 13 07:57:41.514: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:57:41.514: INFO: validating pod update-demo-nautilus-mjwv4 Jan 13 07:57:41.521: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:57:41.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:57:41.521: INFO: update-demo-nautilus-mjwv4 is verified up and running Jan 13 07:57:41.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-p7f48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 07:57:42.820: INFO: stderr: "" Jan 13 07:57:42.820: INFO: stdout: "true" Jan 13 07:57:42.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods update-demo-nautilus-p7f48 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 07:57:44.171: INFO: stderr: "" Jan 13 07:57:44.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 07:57:44.171: INFO: validating pod update-demo-nautilus-p7f48 Jan 13 07:57:44.178: INFO: got data: { "image": "nautilus.jpg" } Jan 13 07:57:44.178: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 07:57:44.178: INFO: update-demo-nautilus-p7f48 is verified up and running STEP: using delete to clean up resources Jan 13 07:57:44.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 delete --grace-period=0 --force -f -' Jan 13 07:57:45.457: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 07:57:45.457: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 13 07:57:45.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get rc,svc -l name=update-demo --no-headers' Jan 13 07:57:46.820: INFO: stderr: "No resources found in kubectl-7564 namespace.\n" Jan 13 07:57:46.820: INFO: stdout: "" Jan 13 07:57:46.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7564 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 07:57:49.363: INFO: stderr: "" Jan 13 07:57:49.365: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:57:49.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7564" for this suite. 
• [SLOW TEST:71.823 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":309,"completed":222,"skipped":3897,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:57:49.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 07:57:49.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679" in namespace "downward-api-204" to be "Succeeded or Failed" Jan 13 07:57:49.592: INFO: Pod "downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679": Phase="Pending", Reason="", readiness=false. Elapsed: 16.823106ms Jan 13 07:57:51.643: INFO: Pod "downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067775779s Jan 13 07:57:53.655: INFO: Pod "downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079346738s STEP: Saw pod success Jan 13 07:57:53.655: INFO: Pod "downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679" satisfied condition "Succeeded or Failed" Jan 13 07:57:53.749: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679 container client-container: STEP: delete the pod Jan 13 07:57:53.830: INFO: Waiting for pod downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679 to disappear Jan 13 07:57:53.880: INFO: Pod downwardapi-volume-3b5bbeea-b97c-4523-9f08-ef93c9aa4679 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:57:53.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-204" for this suite. 
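The pod spec this Downward API test creates is not printed in the log. As an illustration of what "set mode on item file" means at the API level, here is a hedged sketch of such a pod: a downward API volume whose single item sets an explicit Mode. The pod and container names, the 0400 mode, and the busybox image and command are assumptions; only the DownwardAPIVolumeFile fields are the point.

```go
// Illustrative sketch only; not the test's actual pod definition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func downwardAPIPod(ns string) *corev1.Pod {
	mode := int32(0400) // per-item mode; without it the volume's DefaultMode (0644) applies
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     &mode,
						}},
					},
				},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // path from the run above
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("default").Create(context.TODO(), downwardAPIPod("default"), metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", pod.Name)
}
```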
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":223,"skipped":3898,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:57:53.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 07:57:55.769: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 07:57:58.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121475, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121475, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121475, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121475, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 07:58:01.175: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:58:01.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1950" for this suite. STEP: Destroying namespace "webhook-1950-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.631 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":309,"completed":224,"skipped":3926,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:58:01.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3387.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3387.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3387.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3387.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3387.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 07:58:07.861: INFO: DNS probes using dns-3387/dns-test-55363d90-6b15-4b78-8af4-c026c1f493b6 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:58:07.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3387" for this suite. • [SLOW TEST:6.489 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":309,"completed":225,"skipped":3936,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:58:08.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override command Jan 13 07:58:08.464: INFO: Waiting up to 5m0s for pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf" in namespace "containers-6669" to be "Succeeded or Failed" Jan 13 07:58:08.482: INFO: Pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.949364ms Jan 13 07:58:10.548: INFO: Pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083700118s Jan 13 07:58:12.555: INFO: Pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090297146s Jan 13 07:58:14.984: INFO: Pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.519485861s STEP: Saw pod success Jan 13 07:58:14.985: INFO: Pod "client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf" satisfied condition "Succeeded or Failed" Jan 13 07:58:14.991: INFO: Trying to get logs from node leguer-worker pod client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf container agnhost-container: STEP: delete the pod Jan 13 07:58:15.439: INFO: Waiting for pod client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf to disappear Jan 13 07:58:15.456: INFO: Pod client-containers-774264c2-9b2d-402e-bb1d-823b23c96dcf no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:58:15.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6669" for this suite. • [SLOW TEST:7.445 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":309,"completed":226,"skipped":4008,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:58:15.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 07:58:15.604: INFO: Creating deployment "webserver-deployment" Jan 13 07:58:15.611: INFO: Waiting for observed generation 1 Jan 13 07:58:17.869: INFO: Waiting for all required pods to come up Jan 13 07:58:18.142: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 13 07:58:30.430: INFO: Waiting for deployment "webserver-deployment" to complete Jan 13 07:58:30.440: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 13 07:58:30.458: INFO: Updating deployment webserver-deployment Jan 13 07:58:30.458: INFO: Waiting for observed generation 2 Jan 13 07:58:32.470: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 13 07:58:32.475: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 13 07:58:32.480: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 13 07:58:32.492: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 13 07:58:32.492: INFO: Waiting for the second rollout's replicaset 
to have .spec.replicas = 5 Jan 13 07:58:32.495: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 13 07:58:33.506: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 13 07:58:33.507: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 13 07:58:33.989: INFO: Updating deployment webserver-deployment Jan 13 07:58:33.990: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 13 07:58:34.490: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 13 07:58:36.969: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 07:58:37.395: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7245 ba929919-142a-43bb-9a6f-a819ebe416f7 509510 3 2021-01-13 07:58:15 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x4003abe4e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-01-13 07:58:34 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-01-13 07:58:34 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 13 07:58:37.405: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-7245 afd8a19d-f948-40cc-aca7-86cedf5778cc 509496 3 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment ba929919-142a-43bb-9a6f-a819ebe416f7 0x40051cc2e7 0x40051cc2e8}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba929919-142a-43bb-9a6f-a819ebe416f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40051cc368 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:58:37.406: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 13 07:58:37.406: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-7245 f6af64ef-a736-4af1-b779-769429aa61ae 509506 3 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment ba929919-142a-43bb-9a6f-a819ebe416f7 0x40051cc3c7 0x40051cc3c8}] [] [{kube-controller-manager Update apps/v1 2021-01-13 07:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba929919-142a-43bb-9a6f-a819ebe416f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x40051cc438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 13 07:58:37.425: INFO: Pod "webserver-deployment-795d758f88-2hw5t" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2hw5t webserver-deployment-795d758f88- deployment-7245 d2a0bcdd-5331-432f-a9ac-03263ead1183 509563 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cc887 0x40051cc888}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Eff
ect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.426: INFO: Pod "webserver-deployment-795d758f88-4xqnj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4xqnj webserver-deployment-795d758f88- deployment-7245 e6d97b96-ec04-43b2-bebc-b02724930267 509565 0 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cca30 0x40051cca31}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.144,StartTime:2021-01-13 07:58:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.428: INFO: Pod "webserver-deployment-795d758f88-5kcdm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5kcdm webserver-deployment-795d758f88- deployment-7245 b97588e1-dea7-4733-bea4-2dade803e129 509566 0 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051ccc00 0x40051ccc01}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.69,StartTime:2021-01-13 07:58:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.429: INFO: Pod "webserver-deployment-795d758f88-5mqpt" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5mqpt webserver-deployment-795d758f88- deployment-7245 b28a263e-8862-4aef-b22d-85a21adfd3af 509518 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051ccdd0 0x40051ccdd1}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.431: INFO: Pod "webserver-deployment-795d758f88-7g4jp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7g4jp webserver-deployment-795d758f88- deployment-7245 70050eea-a1fa-4696-93b2-9fa354645593 509539 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051ccf70 0x40051ccf71}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.432: INFO: Pod "webserver-deployment-795d758f88-8mg7d" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8mg7d webserver-deployment-795d758f88- deployment-7245 066ff50f-8e43-47c4-ba45-8610ffa66593 509426 0 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd110 0x40051cd111}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.433: INFO: Pod "webserver-deployment-795d758f88-9hlrj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9hlrj webserver-deployment-795d758f88- deployment-7245 c6ec839c-b8b1-4bfa-b80e-233abb833db1 509545 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd2b0 0x40051cd2b1}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.434: INFO: Pod "webserver-deployment-795d758f88-bjg2j" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bjg2j webserver-deployment-795d758f88- deployment-7245 dbe4d25b-79e6-422f-908b-bc305f12d9ec 509559 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd450 0x40051cd451}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.435: INFO: Pod "webserver-deployment-795d758f88-gxxn2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gxxn2 webserver-deployment-795d758f88- deployment-7245 f6858b0c-1e12-4e61-ba44-6dd75f2d5a22 509528 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd5f0 0x40051cd5f1}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.437: INFO: Pod "webserver-deployment-795d758f88-l8qlz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l8qlz webserver-deployment-795d758f88- deployment-7245 babb1812-fe8c-4eb1-a1cd-ca7a662450e8 509423 0 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd790 0x40051cd791}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.438: INFO: Pod "webserver-deployment-795d758f88-nn5kc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-nn5kc webserver-deployment-795d758f88- deployment-7245 c85352c7-e0e6-412e-9e26-221c2b564dba 509522 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cd930 0x40051cd931}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.439: INFO: Pod "webserver-deployment-795d758f88-p4x94" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p4x94 webserver-deployment-795d758f88- deployment-7245 33d6aea0-12e4-4f46-b6a9-3da9f1d53020 509417 0 2021-01-13 07:58:30 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cdad0 0x40051cdad1}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.440: INFO: Pod "webserver-deployment-795d758f88-z9v8q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-z9v8q webserver-deployment-795d758f88- deployment-7245 d5d379d8-6bf9-4eb0-91b1-b337cf6a4c91 509503 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 afd8a19d-f948-40cc-aca7-86cedf5778cc 0x40051cdc70 0x40051cdc71}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"afd8a19d-f948-40cc-aca7-86cedf5778cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.441: INFO: Pod "webserver-deployment-dd94f59b7-496jm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-496jm webserver-deployment-dd94f59b7- deployment-7245 559aa46e-33c5-44db-8a91-d8c2d72426e8 509564 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40051cde10 0x40051cde11}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.442: INFO: Pod "webserver-deployment-dd94f59b7-4mhjx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4mhjx webserver-deployment-dd94f59b7- deployment-7245 5b8798de-ba10-46c3-90c9-b129d5226713 509509 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40051cdf97 0x40051cdf98}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.444: INFO: Pod "webserver-deployment-dd94f59b7-4srrl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4srrl webserver-deployment-dd94f59b7- deployment-7245 15870e0f-1c5f-4bf1-a44d-cfc17f044184 509524 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6137 0x40049a6138}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.445: INFO: Pod "webserver-deployment-dd94f59b7-6lvjk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6lvjk webserver-deployment-dd94f59b7- deployment-7245 59adf34c-09a0-4daa-b49a-ab96fd9ca63e 509355 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a62c7 0x40049a62c8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.141\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.141,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://767a9780c232b7adbc6c1df2151e48dbd2c8cbaf2fad9ed4a0f0747a9585bb84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.141,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.447: INFO: Pod "webserver-deployment-dd94f59b7-6p2jw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6p2jw webserver-deployment-dd94f59b7- deployment-7245 d9a71968-28fa-4b29-8f9c-c07d594114de 509341 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6477 0x40049a6478}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.139,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cc7c0b98a6956404f1bb8b175fff1ce8b2540aaac2148c91ae873283d4b4a6fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.448: INFO: Pod "webserver-deployment-dd94f59b7-7d5x8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7d5x8 webserver-deployment-dd94f59b7- deployment-7245 c3c2597c-61d6-46c2-91f7-15469c6c73db 509547 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6627 0x40049a6628}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.449: INFO: Pod "webserver-deployment-dd94f59b7-bbghz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bbghz webserver-deployment-dd94f59b7- deployment-7245 ea665b18-1bf4-4d32-8389-c3143ed4f488 509512 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a67b7 0x40049a67b8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.450: INFO: Pod "webserver-deployment-dd94f59b7-ctp6x" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-ctp6x webserver-deployment-dd94f59b7- deployment-7245 c31c4731-84fe-4bdc-86c3-bd51073d06e7 509318 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6947 0x40049a6948}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.138\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.138,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://564eed0c9ded346abb2182516b546e41aea952c70c2501ead1c56a4d8e1dee95,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.138,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.451: INFO: Pod "webserver-deployment-dd94f59b7-dcjtb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dcjtb webserver-deployment-dd94f59b7- deployment-7245 88f71af1-105a-47f1-988e-d46859d9808b 509325 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6af7 0x40049a6af8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.67,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4bbefe207b5b4936aa03dbf06dc0d0dbd79367b7171a6cd242a2e88b98a2b4ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.452: INFO: Pod "webserver-deployment-dd94f59b7-glxkp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-glxkp webserver-deployment-dd94f59b7- deployment-7245 8cb5485f-a7b8-4529-92a4-fcfdeb8f673f 509500 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6ca7 0x40049a6ca8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.453: INFO: Pod "webserver-deployment-dd94f59b7-gsq7j" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gsq7j webserver-deployment-dd94f59b7- deployment-7245 076f4b8e-c873-4416-b6ae-5050ab00f46c 509360 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6e37 0x40049a6e38}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.140\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.140,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8995a8fd32a442d2b3c933c5ad3d4e5808c8c40da3c8c32a02b77bc7b0c88b58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.140,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.455: INFO: Pod "webserver-deployment-dd94f59b7-l98z9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l98z9 webserver-deployment-dd94f59b7- deployment-7245 6090e614-c963-4334-a533-ff4d72b4fd7f 509298 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a6fe7 0x40049a6fe8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.65,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b1ec1a349c608eaa25bcd33bed31ab4ce0cd377fdc97f1143a06257f89b4375c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.456: INFO: Pod "webserver-deployment-dd94f59b7-lg4hb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lg4hb webserver-deployment-dd94f59b7- deployment-7245 6a8e70b4-7c29-4bfa-ae7a-fa8be6a901ea 509479 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7197 0x40049a7198}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.457: INFO: Pod "webserver-deployment-dd94f59b7-mgz8x" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mgz8x webserver-deployment-dd94f59b7- deployment-7245 182b7253-40a7-468d-a776-e85712f18333 509536 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7327 0x40049a7328}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.458: INFO: Pod "webserver-deployment-dd94f59b7-nbq6r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nbq6r webserver-deployment-dd94f59b7- deployment-7245 de3b4a5d-23d5-4a7e-bb3b-d1a4697bd1e9 509516 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a74b7 0x40049a74b8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.459: INFO: Pod "webserver-deployment-dd94f59b7-nnrj6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nnrj6 webserver-deployment-dd94f59b7- deployment-7245 9082f08f-4e09-4381-9411-bad3fbf4636b 509532 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7647 0x40049a7648}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.461: INFO: Pod "webserver-deployment-dd94f59b7-pplbt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pplbt webserver-deployment-dd94f59b7- deployment-7245 c51ef0d8-eac8-44ed-930f-b27ae15434cc 509348 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a77e7 0x40049a77e8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:28 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.68,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7b7ec3527ec2ef4ed1307f1653a038f6cda8ae3f9672a6b5715ab92a8ae73136,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.462: INFO: Pod "webserver-deployment-dd94f59b7-pwn72" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pwn72 webserver-deployment-dd94f59b7- deployment-7245 d8d1e4fa-ce73-4e2c-bb50-904002a4ea48 509334 0 2021-01-13 07:58:15 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7997 0x40049a7998}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.66,StartTime:2021-01-13 07:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 07:58:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2676a1ed9fe2da7315a00ec09e084ccce21ca4cc6834be84d2c48bd2bc1ba9de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.463: INFO: Pod "webserver-deployment-dd94f59b7-rl8gh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rl8gh webserver-deployment-dd94f59b7- deployment-7245 34c65c86-7a72-4d88-9095-02ef7392d282 509557 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7b47 0x40049a7b48}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 07:58:37.464: INFO: Pod "webserver-deployment-dd94f59b7-zqhhz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zqhhz webserver-deployment-dd94f59b7- deployment-7245 1797d0b0-858f-4fe7-8e53-22ac30f449be 509493 0 2021-01-13 07:58:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 f6af64ef-a736-4af1-b779-769429aa61ae 0x40049a7cd7 0x40049a7cd8}] [] [{kube-controller-manager Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6af64ef-a736-4af1-b779-769429aa61ae\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 07:58:34 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xhxjz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xhxjz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xhxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 07:58:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2021-01-13 07:58:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:58:37.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7245" for this suite. • [SLOW TEST:22.003 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":309,"completed":227,"skipped":4011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:58:37.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 07:59:02.802: INFO: DNS probes using dns-test-934b903a-4978-4e98-8285-6be5cfaddaef succeeded STEP: deleting 
the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 07:59:09.465: INFO: File wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:09.470: INFO: File jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:09.470: INFO: Lookups using dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e failed for: [wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local] Jan 13 07:59:14.478: INFO: File wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:14.484: INFO: File jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:14.484: INFO: Lookups using dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e failed for: [wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local] Jan 13 07:59:19.477: INFO: File wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:19.482: INFO: File jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:19.482: INFO: Lookups using dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e failed for: [wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local] Jan 13 07:59:24.478: INFO: File wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 07:59:24.483: INFO: File jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local from pod dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e contains 'foo.example.com. ' instead of 'bar.example.com.' 
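The run of failed lookups above is expected while the ExternalName change propagates: the probe pods keep resolving the old CNAME target (foo.example.com.) until cluster DNS serves the new one (bar.example.com.), and the framework simply retries every five seconds. A rough manual equivalent of the same check, assuming a throwaway pod named dnsutils with dig installed in the dns-1499 namespace (the pod name is an illustrative assumption; the service FQDN is the one shown in the log):

  # Resolve the ExternalName service's CNAME from inside the cluster.
  kubectl -n dns-1499 exec dnsutils -- dig +short dns-test-service-3.dns-1499.svc.cluster.local CNAME
  # Once the change has propagated this prints: bar.example.com.
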
Jan 13 07:59:24.483: INFO: Lookups using dns-1499/dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e failed for: [wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local] Jan 13 07:59:29.481: INFO: DNS probes using dns-test-e4291e6e-90e5-47ae-af11-f8fde1f5d00e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1499.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1499.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 07:59:38.858: INFO: DNS probes using dns-test-10d95e93-f1bb-4393-aa57-40ade1d1597b succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:59:39.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1499" for this suite. • [SLOW TEST:61.841 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":309,"completed":228,"skipped":4048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:59:39.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 13 07:59:44.141: INFO: Successfully updated pod "annotationupdatee5842574-03aa-4350-8c6b-41487ff70dec" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 07:59:48.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5795" for this suite. 
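The annotation-update spec that finishes here depends on the kubelet refreshing downward API volume contents after a metadata change, not only at pod start. A rough way to exercise the same behaviour by hand, assuming a pod named mypod that projects metadata.annotations into a file at /etc/podinfo/annotations (both the pod name and the mount path are illustrative assumptions, not taken from this run):

  # Change an annotation on the running pod...
  kubectl annotate pod mypod test-annotation=updated --overwrite
  # ...then re-read the projected file; the kubelet rewrites it on its periodic sync,
  # so the new value can take up to about a minute to appear.
  kubectl exec mypod -- cat /etc/podinfo/annotations
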
• [SLOW TEST:8.895 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":309,"completed":229,"skipped":4137,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 07:59:48.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0113 07:59:59.560242 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 08:01:01.586: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Jan 13 08:01:01.587: INFO: Deleting pod "simpletest-rc-to-be-deleted-65b62" in namespace "gc-8880" Jan 13 08:01:01.626: INFO: Deleting pod "simpletest-rc-to-be-deleted-7dl5n" in namespace "gc-8880" Jan 13 08:01:01.680: INFO: Deleting pod "simpletest-rc-to-be-deleted-gwrcz" in namespace "gc-8880" Jan 13 08:01:02.052: INFO: Deleting pod "simpletest-rc-to-be-deleted-hn5sr" in namespace "gc-8880" Jan 13 08:01:02.451: INFO: Deleting pod "simpletest-rc-to-be-deleted-lq5jp" in namespace "gc-8880" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:01:02.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8880" for this suite. 
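The garbage-collector spec that finishes here hinges on half of the pods carrying two ownerReferences, so deleting simpletest-rc-to-be-deleted must leave any pod that is still owned by simpletest-rc-to-stay untouched. That ownership can be inspected directly with a jsonpath query (the pod name below is a placeholder for one of the surviving pods; the namespace is the one from this run, which the teardown above has already destroyed):

  # List the owners recorded on a pod; a pod that also lists simpletest-rc-to-stay as an
  # owner is not eligible for garbage collection when its other owner goes away.
  kubectl -n gc-8880 get pod <surviving-pod-name> \
    -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
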
• [SLOW TEST:74.466 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":309,"completed":230,"skipped":4145,"failed":0} [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:01:02.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-6e3870f6-28b8-477f-9899-6250fd75436b in namespace container-probe-808 Jan 13 08:01:07.073: INFO: Started pod liveness-6e3870f6-28b8-477f-9899-6250fd75436b in namespace container-probe-808 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 08:01:07.078: INFO: Initial restart count of pod liveness-6e3870f6-28b8-477f-9899-6250fd75436b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:05:08.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-808" for this suite. 
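The probe spec that finishes here passes only because restartCount stays at its initial value of 0 for the whole observation window (about four minutes in this run, per the elapsed time below), meaning the tcp:8080 liveness probe kept succeeding and the kubelet never restarted the container. The counter the test watches can be read with a one-line query (pod and namespace names taken from the log; the namespace is destroyed at the end of the spec, so this is a sketch of the check rather than something to run against this cluster now):

  # Prints the container's restart count; it should remain 0 while the tcp probe keeps passing.
  kubectl -n container-probe-808 get pod liveness-6e3870f6-28b8-477f-9899-6250fd75436b \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'
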
• [SLOW TEST:245.483 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":309,"completed":231,"skipped":4145,"failed":0} S ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:05:08.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service nodeport-test with type=NodePort in namespace services-7171 STEP: creating replication controller nodeport-test in namespace services-7171 I0113 08:05:08.898718 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7171, replica count: 2 I0113 08:05:11.950463 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 08:05:14.951310 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 08:05:14.951: INFO: Creating new exec pod Jan 13 08:05:19.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7171 exec execpod9gsql -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Jan 13 08:05:25.572: INFO: stderr: "I0113 08:05:25.461117 3533 log.go:181] (0x40006d0c60) (0x400096df40) Create stream\nI0113 08:05:25.465500 3533 log.go:181] (0x40006d0c60) (0x400096df40) Stream added, broadcasting: 1\nI0113 08:05:25.475131 3533 log.go:181] (0x40006d0c60) Reply frame received for 1\nI0113 08:05:25.475665 3533 log.go:181] (0x40006d0c60) (0x40005c01e0) Create stream\nI0113 08:05:25.475729 3533 log.go:181] (0x40006d0c60) (0x40005c01e0) Stream added, broadcasting: 3\nI0113 08:05:25.477067 3533 log.go:181] (0x40006d0c60) Reply frame received for 3\nI0113 08:05:25.477322 3533 log.go:181] (0x40006d0c60) (0x40009ba000) Create stream\nI0113 08:05:25.477383 3533 log.go:181] (0x40006d0c60) (0x40009ba000) Stream added, broadcasting: 5\nI0113 08:05:25.478608 3533 log.go:181] (0x40006d0c60) Reply frame received for 5\nI0113 08:05:25.552737 3533 log.go:181] (0x40006d0c60) Data frame received for 5\nI0113 08:05:25.553336 3533 log.go:181] (0x40006d0c60) Data frame received for 3\nI0113 08:05:25.553549 3533 log.go:181] (0x40009ba000) (5) Data frame handling\nI0113 
08:05:25.553896 3533 log.go:181] (0x40005c01e0) (3) Data frame handling\nI0113 08:05:25.554458 3533 log.go:181] (0x40006d0c60) Data frame received for 1\nI0113 08:05:25.554594 3533 log.go:181] (0x400096df40) (1) Data frame handling\nI0113 08:05:25.556188 3533 log.go:181] (0x40009ba000) (5) Data frame sent\nI0113 08:05:25.556281 3533 log.go:181] (0x400096df40) (1) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0113 08:05:25.557028 3533 log.go:181] (0x40006d0c60) Data frame received for 5\nI0113 08:05:25.557131 3533 log.go:181] (0x40009ba000) (5) Data frame handling\nI0113 08:05:25.557262 3533 log.go:181] (0x40009ba000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0113 08:05:25.557353 3533 log.go:181] (0x40006d0c60) Data frame received for 5\nI0113 08:05:25.557433 3533 log.go:181] (0x40009ba000) (5) Data frame handling\nI0113 08:05:25.558145 3533 log.go:181] (0x40006d0c60) (0x400096df40) Stream removed, broadcasting: 1\nI0113 08:05:25.560974 3533 log.go:181] (0x40006d0c60) Go away received\nI0113 08:05:25.563065 3533 log.go:181] (0x40006d0c60) (0x400096df40) Stream removed, broadcasting: 1\nI0113 08:05:25.563483 3533 log.go:181] (0x40006d0c60) (0x40005c01e0) Stream removed, broadcasting: 3\nI0113 08:05:25.563736 3533 log.go:181] (0x40006d0c60) (0x40009ba000) Stream removed, broadcasting: 5\n" Jan 13 08:05:25.573: INFO: stdout: "" Jan 13 08:05:25.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7171 exec execpod9gsql -- /bin/sh -x -c nc -zv -t -w 2 10.96.58.183 80' Jan 13 08:05:27.182: INFO: stderr: "I0113 08:05:27.033005 3553 log.go:181] (0x40003c8000) (0x4000c88000) Create stream\nI0113 08:05:27.037259 3553 log.go:181] (0x40003c8000) (0x4000c88000) Stream added, broadcasting: 1\nI0113 08:05:27.056026 3553 log.go:181] (0x40003c8000) Reply frame received for 1\nI0113 08:05:27.057729 3553 log.go:181] (0x40003c8000) (0x4000d90500) Create stream\nI0113 08:05:27.057847 3553 log.go:181] (0x40003c8000) (0x4000d90500) Stream added, broadcasting: 3\nI0113 08:05:27.059430 3553 log.go:181] (0x40003c8000) Reply frame received for 3\nI0113 08:05:27.059654 3553 log.go:181] (0x40003c8000) (0x4000c880a0) Create stream\nI0113 08:05:27.059710 3553 log.go:181] (0x40003c8000) (0x4000c880a0) Stream added, broadcasting: 5\nI0113 08:05:27.060701 3553 log.go:181] (0x40003c8000) Reply frame received for 5\nI0113 08:05:27.161131 3553 log.go:181] (0x40003c8000) Data frame received for 5\nI0113 08:05:27.161400 3553 log.go:181] (0x40003c8000) Data frame received for 3\nI0113 08:05:27.161926 3553 log.go:181] (0x4000c880a0) (5) Data frame handling\nI0113 08:05:27.162600 3553 log.go:181] (0x4000d90500) (3) Data frame handling\nI0113 08:05:27.163318 3553 log.go:181] (0x40003c8000) Data frame received for 1\nI0113 08:05:27.163430 3553 log.go:181] (0x4000c88000) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.58.183 80\nConnection to 10.96.58.183 80 port [tcp/http] succeeded!\nI0113 08:05:27.166321 3553 log.go:181] (0x4000c88000) (1) Data frame sent\nI0113 08:05:27.166608 3553 log.go:181] (0x4000c880a0) (5) Data frame sent\nI0113 08:05:27.168099 3553 log.go:181] (0x40003c8000) Data frame received for 5\nI0113 08:05:27.168547 3553 log.go:181] (0x40003c8000) (0x4000c88000) Stream removed, broadcasting: 1\nI0113 08:05:27.169467 3553 log.go:181] (0x4000c880a0) (5) Data frame handling\nI0113 08:05:27.170464 3553 log.go:181] (0x40003c8000) Go away received\nI0113 08:05:27.173286 3553 log.go:181] 
(0x40003c8000) (0x4000c88000) Stream removed, broadcasting: 1\nI0113 08:05:27.173627 3553 log.go:181] (0x40003c8000) (0x4000d90500) Stream removed, broadcasting: 3\nI0113 08:05:27.173842 3553 log.go:181] (0x40003c8000) (0x4000c880a0) Stream removed, broadcasting: 5\n" Jan 13 08:05:27.183: INFO: stdout: "" Jan 13 08:05:27.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7171 exec execpod9gsql -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32324' Jan 13 08:05:28.768: INFO: stderr: "I0113 08:05:28.639549 3574 log.go:181] (0x40007c0840) (0x4000b083c0) Create stream\nI0113 08:05:28.644105 3574 log.go:181] (0x40007c0840) (0x4000b083c0) Stream added, broadcasting: 1\nI0113 08:05:28.655522 3574 log.go:181] (0x40007c0840) Reply frame received for 1\nI0113 08:05:28.656088 3574 log.go:181] (0x40007c0840) (0x4000734000) Create stream\nI0113 08:05:28.656150 3574 log.go:181] (0x40007c0840) (0x4000734000) Stream added, broadcasting: 3\nI0113 08:05:28.657987 3574 log.go:181] (0x40007c0840) Reply frame received for 3\nI0113 08:05:28.658426 3574 log.go:181] (0x40007c0840) (0x40006be320) Create stream\nI0113 08:05:28.658531 3574 log.go:181] (0x40007c0840) (0x40006be320) Stream added, broadcasting: 5\nI0113 08:05:28.659986 3574 log.go:181] (0x40007c0840) Reply frame received for 5\nI0113 08:05:28.748196 3574 log.go:181] (0x40007c0840) Data frame received for 3\nI0113 08:05:28.748489 3574 log.go:181] (0x40007c0840) Data frame received for 1\nI0113 08:05:28.748645 3574 log.go:181] (0x4000b083c0) (1) Data frame handling\nI0113 08:05:28.748945 3574 log.go:181] (0x40007c0840) Data frame received for 5\nI0113 08:05:28.749021 3574 log.go:181] (0x40006be320) (5) Data frame handling\nI0113 08:05:28.749286 3574 log.go:181] (0x4000734000) (3) Data frame handling\nI0113 08:05:28.749659 3574 log.go:181] (0x40006be320) (5) Data frame sent\nI0113 08:05:28.749829 3574 log.go:181] (0x4000b083c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 32324\nI0113 08:05:28.752235 3574 log.go:181] (0x40007c0840) Data frame received for 5\nI0113 08:05:28.752788 3574 log.go:181] (0x40007c0840) (0x4000b083c0) Stream removed, broadcasting: 1\nI0113 08:05:28.754415 3574 log.go:181] (0x40006be320) (5) Data frame handling\nI0113 08:05:28.754517 3574 log.go:181] (0x40006be320) (5) Data frame sent\nI0113 08:05:28.754592 3574 log.go:181] (0x40007c0840) Data frame received for 5\nConnection to 172.18.0.13 32324 port [tcp/32324] succeeded!\nI0113 08:05:28.754654 3574 log.go:181] (0x40006be320) (5) Data frame handling\nI0113 08:05:28.755335 3574 log.go:181] (0x40007c0840) Go away received\nI0113 08:05:28.759453 3574 log.go:181] (0x40007c0840) (0x4000b083c0) Stream removed, broadcasting: 1\nI0113 08:05:28.759813 3574 log.go:181] (0x40007c0840) (0x4000734000) Stream removed, broadcasting: 3\nI0113 08:05:28.760107 3574 log.go:181] (0x40007c0840) (0x40006be320) Stream removed, broadcasting: 5\n" Jan 13 08:05:28.769: INFO: stdout: "" Jan 13 08:05:28.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7171 exec execpod9gsql -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32324' Jan 13 08:05:30.463: INFO: stderr: "I0113 08:05:30.328283 3595 log.go:181] (0x400003a0b0) (0x4000c380a0) Create stream\nI0113 08:05:30.331543 3595 log.go:181] (0x400003a0b0) (0x4000c380a0) Stream added, broadcasting: 1\nI0113 08:05:30.345317 3595 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0113 08:05:30.346615 
3595 log.go:181] (0x400003a0b0) (0x4000c80000) Create stream\nI0113 08:05:30.346736 3595 log.go:181] (0x400003a0b0) (0x4000c80000) Stream added, broadcasting: 3\nI0113 08:05:30.348548 3595 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0113 08:05:30.349126 3595 log.go:181] (0x400003a0b0) (0x4000c38140) Create stream\nI0113 08:05:30.349240 3595 log.go:181] (0x400003a0b0) (0x4000c38140) Stream added, broadcasting: 5\nI0113 08:05:30.351304 3595 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0113 08:05:30.446163 3595 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:05:30.446462 3595 log.go:181] (0x400003a0b0) Data frame received for 1\nI0113 08:05:30.446578 3595 log.go:181] (0x4000c380a0) (1) Data frame handling\nI0113 08:05:30.446938 3595 log.go:181] (0x4000c80000) (3) Data frame handling\nI0113 08:05:30.447131 3595 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:05:30.447205 3595 log.go:181] (0x4000c38140) (5) Data frame handling\nI0113 08:05:30.448487 3595 log.go:181] (0x4000c380a0) (1) Data frame sent\nI0113 08:05:30.448691 3595 log.go:181] (0x4000c38140) (5) Data frame sent\nI0113 08:05:30.448771 3595 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:05:30.448917 3595 log.go:181] (0x4000c38140) (5) Data frame handling\nI0113 08:05:30.449034 3595 log.go:181] (0x400003a0b0) (0x4000c380a0) Stream removed, broadcasting: 1\n+ nc -zv -t -w 2 172.18.0.12 32324\nConnection to 172.18.0.12 32324 port [tcp/32324] succeeded!\nI0113 08:05:30.451766 3595 log.go:181] (0x400003a0b0) Go away received\nI0113 08:05:30.455085 3595 log.go:181] (0x400003a0b0) (0x4000c380a0) Stream removed, broadcasting: 1\nI0113 08:05:30.455479 3595 log.go:181] (0x400003a0b0) (0x4000c80000) Stream removed, broadcasting: 3\nI0113 08:05:30.455714 3595 log.go:181] (0x400003a0b0) (0x4000c38140) Stream removed, broadcasting: 5\n" Jan 13 08:05:30.464: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:05:30.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7171" for this suite. 
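The NodePort spec wrapping up here verifies reachability three ways from the exec pod: by service name, by ClusterIP and port (10.96.58.183:80), and by each node IP on the allocated NodePort (172.18.0.12 and 172.18.0.13 on 32324). Stripped of the framework wrapper, the checks it ran boil down to these commands (names, addresses and the port are the ones from this run and will differ on any other cluster; the namespace is destroyed immediately above):

  # Probe the service by DNS name, by ClusterIP, and by node IP + NodePort from inside the cluster.
  kubectl -n services-7171 exec execpod9gsql -- nc -zv -t -w 2 nodeport-test 80
  kubectl -n services-7171 exec execpod9gsql -- nc -zv -t -w 2 10.96.58.183 80
  kubectl -n services-7171 exec execpod9gsql -- nc -zv -t -w 2 172.18.0.13 32324
  kubectl -n services-7171 exec execpod9gsql -- nc -zv -t -w 2 172.18.0.12 32324
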
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:22.310 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":309,"completed":232,"skipped":4146,"failed":0} [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:05:30.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 13 08:05:30.660: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 510987 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 08:05:30.661: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 510988 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 08:05:30.662: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 510989 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the 
label value was restored Jan 13 08:05:40.748: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 511041 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 08:05:40.749: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 511042 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 08:05:40.750: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3129 b8c41d20-5ab7-4ef2-95ee-3f13d5e04aa7 511043 0 2021-01-13 08:05:30 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-01-13 08:05:30 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:05:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3129" for this suite. 
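The watch spec that finishes here exercises a label-selected watch: changing the watch-this-configmap=label-changed-and-restored label makes the watch report the object as DELETED even though it still exists, and restoring the label produces a fresh ADDED event. The same behaviour can be reproduced interactively, roughly as follows (configmap name, namespace and label values are the ones from this run; the object and namespace were removed during teardown, so treat this as a sketch):

  # Terminal 1: watch only configmaps matching the label the test selects on;
  # --output-watch-events prefixes each line with the watch event type (ADDED/MODIFIED/DELETED).
  kubectl -n watch-3129 get configmaps -l watch-this-configmap=label-changed-and-restored --watch --output-watch-events
  # Terminal 2: flip the label away and back; terminal 1 shows DELETED, then ADDED.
  kubectl -n watch-3129 label configmap e2e-watch-test-label-changed watch-this-configmap=wrong-value --overwrite
  kubectl -n watch-3129 label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite
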
• [SLOW TEST:10.282 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":309,"completed":233,"skipped":4146,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:05:40.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 08:05:44.001: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 08:05:46.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121944, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121944, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121944, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746121943, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 08:05:49.108: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Jan 13 08:05:57.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=webhook-5329 attach --namespace=webhook-5329 to-be-attached-pod -i -c=container1' Jan 13 08:05:58.707: INFO: rc: 1 [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:05:58.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5329" for this suite. STEP: Destroying namespace "webhook-5329-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:18.081 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":309,"completed":234,"skipped":4153,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:05:58.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 08:05:58.958: INFO: Waiting up to 5m0s for pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c" in namespace "downward-api-4051" to be "Succeeded or Failed" Jan 13 08:05:58.977: INFO: Pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.191574ms Jan 13 08:06:01.018: INFO: Pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059460619s Jan 13 08:06:03.026: INFO: Pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c": Phase="Running", Reason="", readiness=true. Elapsed: 4.067562206s Jan 13 08:06:05.033: INFO: Pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.074314382s STEP: Saw pod success Jan 13 08:06:05.033: INFO: Pod "downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c" satisfied condition "Succeeded or Failed" Jan 13 08:06:05.044: INFO: Trying to get logs from node leguer-worker pod downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c container dapi-container: STEP: delete the pod Jan 13 08:06:05.126: INFO: Waiting for pod downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c to disappear Jan 13 08:06:05.134: INFO: Pod downward-api-b6844f1f-b2bd-49b9-a912-7abefb68ad1c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:06:05.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4051" for this suite. • [SLOW TEST:6.300 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":309,"completed":235,"skipped":4168,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:06:05.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-3b43baf7-5f6b-4d72-8f7d-6f852b9f2eab in namespace container-probe-2110 Jan 13 08:06:09.356: INFO: Started pod liveness-3b43baf7-5f6b-4d72-8f7d-6f852b9f2eab in namespace container-probe-2110 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 08:06:09.362: INFO: Initial restart count of pod liveness-3b43baf7-5f6b-4d72-8f7d-6f852b9f2eab is 0 Jan 13 08:06:31.587: INFO: Restart count of pod container-probe-2110/liveness-3b43baf7-5f6b-4d72-8f7d-6f852b9f2eab is now 1 (22.225113829s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:06:31.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2110" for this suite. 
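------------------------------
The probe test above relies on an HTTP liveness probe against /healthz: once the endpoint starts failing, the kubelet restarts the container and the observed restartCount moves from 0 to 1. A sketch of how such a pod could be declared with client-go follows, assuming a clientset built as in the earlier watch sketch; the image, its args, the port, and the probe timings are illustrative rather than the test's actual values, and the ProbeHandler field name follows current k8s.io/api (the v1.20-era API embedded a Handler field instead).

package probes

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
)

// createLivenessPod creates a pod whose /healthz endpoint eventually fails, so
// the kubelet kills and restarts the container, bumping restartCount.
func createLivenessPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyAlways,
            Containers: []corev1.Container{{
                Name:  "agnhost",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative image/tag
                Args:  []string{"liveness"},
                LivenessProbe: &corev1.Probe{
                    ProbeHandler: corev1.ProbeHandler{
                        HTTPGet: &corev1.HTTPGetAction{
                            Path: "/healthz",
                            Port: intstr.FromInt(8080),
                        },
                    },
                    InitialDelaySeconds: 15, // illustrative timings
                    FailureThreshold:    1,
                },
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}
------------------------------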
• [SLOW TEST:26.510 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":309,"completed":236,"skipped":4185,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:06:31.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 08:06:32.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31" in namespace "downward-api-4508" to be "Succeeded or Failed" Jan 13 08:06:32.100: INFO: Pod "downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 12.584368ms Jan 13 08:06:34.514: INFO: Pod "downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427303069s Jan 13 08:06:36.522: INFO: Pod "downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.434892457s STEP: Saw pod success Jan 13 08:06:36.522: INFO: Pod "downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31" satisfied condition "Succeeded or Failed" Jan 13 08:06:36.527: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31 container client-container: STEP: delete the pod Jan 13 08:06:36.678: INFO: Waiting for pod downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31 to disappear Jan 13 08:06:36.760: INFO: Pod downwardapi-volume-24c509c1-a8db-4b50-8672-10c6d31b3d31 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:06:36.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4508" for this suite. 
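------------------------------
The Downward API volume test above projects limits.cpu into a file while deliberately leaving the container without a CPU limit, so the projected value falls back to the node's allocatable CPU. A hedged sketch of that shape follows, not the test's actual manifest; the clientset is assumed to be built as in the earlier watch sketch, and the image, mount path, and file name are illustrative.

package downwardapi

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createCPULimitPod projects limits.cpu into a file without setting a CPU limit
// on the container, so the value written is the node's allocatable CPU.
func createCPULimitPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29", // illustrative image
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                // No resources.limits.cpu here: that is the condition under test.
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}
------------------------------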
• [SLOW TEST:5.132 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":309,"completed":237,"skipped":4188,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:06:36.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2699.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 08:06:45.014: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.019: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.027: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.042: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.046: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.049: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod 
dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.053: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:45.061: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:06:50.069: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.074: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.079: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.094: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.105: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.108: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.111: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.115: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:50.122: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:06:55.069: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.074: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.078: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.082: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.092: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.096: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.100: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.104: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:06:55.112: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:07:00.070: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.075: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.079: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.083: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.094: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.097: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.101: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.106: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:00.113: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:07:05.069: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.075: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.081: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.085: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested 
resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.096: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.100: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.103: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.107: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:05.118: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:07:10.068: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.073: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.078: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.083: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.095: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.099: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.103: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.107: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local from pod dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081: the server could not find the requested resource (get pods dns-test-7fc2db82-432c-4262-8796-d623822a1081) Jan 13 08:07:10.115: INFO: Lookups using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2699.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2699.svc.cluster.local jessie_udp@dns-test-service-2.dns-2699.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2699.svc.cluster.local] Jan 13 08:07:15.116: INFO: DNS probes using dns-2699/dns-test-7fc2db82-432c-4262-8796-d623822a1081 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:07:15.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2699" for this suite. • [SLOW TEST:38.887 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":309,"completed":238,"skipped":4192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:07:15.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6342 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6342 STEP: Creating statefulset with conflicting port in namespace statefulset-6342 STEP: 
Waiting until pod test-pod will start running in namespace statefulset-6342 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6342 Jan 13 08:07:22.100: INFO: Observed stateful pod in namespace: statefulset-6342, name: ss-0, uid: 323d5adc-4160-45fa-844a-3e6cb2b238a7, status phase: Pending. Waiting for statefulset controller to delete. Jan 13 08:07:22.445: INFO: Observed stateful pod in namespace: statefulset-6342, name: ss-0, uid: 323d5adc-4160-45fa-844a-3e6cb2b238a7, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 08:07:22.473: INFO: Observed stateful pod in namespace: statefulset-6342, name: ss-0, uid: 323d5adc-4160-45fa-844a-3e6cb2b238a7, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 08:07:22.505: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6342 STEP: Removing pod with conflicting port in namespace statefulset-6342 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6342 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 08:07:26.681: INFO: Deleting all statefulset in ns statefulset-6342 Jan 13 08:07:26.742: INFO: Scaling statefulset ss to 0 Jan 13 08:07:46.777: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 08:07:46.783: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:07:46.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6342" for this suite. 
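------------------------------
The StatefulSet case above provokes repeated recreation of ss-0 by first occupying the pod's hostPort with another pod on the same node. The sketch below shows only the StatefulSet half of that setup, assuming a clientset built as in the earlier watch sketch; the labels, image, hostPort number, and headless Service name are illustrative, not the test's actual manifest.

package statefulsets

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// createStatefulSet creates a one-replica StatefulSet whose pod binds a hostPort.
// If another pod already holds that hostPort on the chosen node, ss-0 fails and
// the controller keeps deleting and recreating it until the port is free.
func createStatefulSet(ctx context.Context, cs kubernetes.Interface, ns string) error {
    labels := map[string]string{"app": "ss-demo"} // illustrative selector labels
    ss := &appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss"},
        Spec: appsv1.StatefulSetSpec{
            ServiceName: "test", // headless Service assumed to exist already
            Replicas:    int32Ptr(1),
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.21", // illustrative image
                        Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
                    }},
                },
            },
        },
    }
    _, err := cs.AppsV1().StatefulSets(ns).Create(ctx, ss, metav1.CreateOptions{})
    return err
}
------------------------------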
• [SLOW TEST:31.139 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":309,"completed":239,"skipped":4242,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:07:46.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 08:07:46.916: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 08:07:46.950: INFO: Waiting for terminating namespaces to be deleted... Jan 13 08:07:46.956: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 08:07:46.969: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 08:07:46.969: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 08:07:46.969: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 08:07:46.969: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 08:07:46.969: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 08:07:46.969: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 08:07:46.969: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container chaos-mesh ready: 
true, restart count 0 Jan 13 08:07:46.969: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.969: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 08:07:46.970: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.970: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 08:07:46.970: INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.970: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 08:07:46.970: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 08:07:46.986: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 08:07:46.986: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 08:07:46.986: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 08:07:46.986: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 08:07:46.986: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 08:07:46.986: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 08:07:46.986: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.986: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 08:07:46.986: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.987: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 08:07:46.987: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 08:07:46.987: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1659bc328b2a2ca5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] 
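------------------------------
The scheduling predicate above is exercised by a pod whose nodeSelector matches no node, which leaves it Pending with a FailedScheduling warning like the event quoted. A rough client-go equivalent follows, assuming a clientset built as in the earlier watch sketch; the selector key/value and image are made up for illustration, and the event listing may race with the scheduler on a real cluster.

package scheduling

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createUnschedulablePod creates a pod whose nodeSelector matches no node, then
// prints the scheduler events; the expected outcome is a Pending pod with a
// FailedScheduling warning like the one logged above.
func createUnschedulablePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"env": "no-such-label"}, // illustrative key/value
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.2", // illustrative image
            }},
        },
    }
    if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        return err
    }
    events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
        FieldSelector: "involvedObject.name=restricted-pod",
    })
    if err != nil {
        return err
    }
    for _, e := range events.Items {
        fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
    }
    return nil
}
------------------------------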
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:07:48.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1856" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":309,"completed":240,"skipped":4253,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:07:48.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test service account token: Jan 13 08:07:48.170: INFO: Waiting up to 5m0s for pod "test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7" in namespace "svcaccounts-9932" to be "Succeeded or Failed" Jan 13 08:07:48.183: INFO: Pod "test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.968622ms Jan 13 08:07:50.192: INFO: Pod "test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02185774s Jan 13 08:07:52.198: INFO: Pod "test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027961224s STEP: Saw pod success Jan 13 08:07:52.198: INFO: Pod "test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7" satisfied condition "Succeeded or Failed" Jan 13 08:07:52.203: INFO: Trying to get logs from node leguer-worker2 pod test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7 container agnhost-container: STEP: delete the pod Jan 13 08:07:52.283: INFO: Waiting for pod test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7 to disappear Jan 13 08:07:52.290: INFO: Pod test-pod-c03ccdfc-c506-4919-bd45-fb7c3fcc08e7 no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:07:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9932" for this suite. 
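------------------------------
The ServiceAccounts test above mounts a projected service account token, i.e. a kubelet-issued, short-lived token written into the pod at a chosen path. A sketch of such a pod spec follows, assuming a clientset built as in the earlier watch sketch; the image, the mount path, and the 3600-second lifetime are illustrative rather than the test's actual values.

package serviceaccounts

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createTokenPod mounts a projected service account token and prints it; the
// kubelet writes a short-lived token for the pod's service account at the path.
func createTokenPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
    expiry := int64(3600) // illustrative token lifetime in seconds
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "test-pod-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:         "agnhost-container",
                Image:        "docker.io/library/busybox:1.29", // illustrative image
                Command:      []string{"sh", "-c", "cat /var/run/secrets/tokens/sa-token"},
                VolumeMounts: []corev1.VolumeMount{{Name: "sa-token", MountPath: "/var/run/secrets/tokens"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "sa-token",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                                Path:              "sa-token",
                                ExpirationSeconds: &expiry,
                            },
                        }},
                    },
                },
            }},
        },
    }
    _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
    return err
}
------------------------------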
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":309,"completed":241,"skipped":4264,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:07:52.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:07:52.495: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0" in namespace "security-context-test-2485" to be "Succeeded or Failed" Jan 13 08:07:52.544: INFO: Pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0": Phase="Pending", Reason="", readiness=false. Elapsed: 49.053366ms Jan 13 08:07:54.552: INFO: Pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056959909s Jan 13 08:07:56.560: INFO: Pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065022541s Jan 13 08:07:56.560: INFO: Pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0" satisfied condition "Succeeded or Failed" Jan 13 08:07:56.569: INFO: Got logs for pod "busybox-privileged-false-df314cba-d483-42db-a7bc-a8cf2db3f3c0": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:07:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2485" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":242,"skipped":4266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:07:56.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 13 08:07:56.748: INFO: >>> kubeConfig: /root/.kube/config Jan 13 08:08:19.344: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:09:39.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1098" for this suite. • [SLOW TEST:102.445 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":309,"completed":243,"skipped":4314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:09:39.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6486.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6486.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 08:09:47.309: INFO: DNS probes using dns-6486/dns-test-9e904bd9-e625-415f-9ef8-5f9f0051c152 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:09:47.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6486" for this suite. • [SLOW TEST:8.477 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":309,"completed":244,"skipped":4345,"failed":0} SS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:09:47.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:09:48.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9674" for this suite. 
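------------------------------
The PodTemplates test above walks a PodTemplate through its lifecycle. A sketch of the create/get/delete portion of that lifecycle via client-go follows, assuming a clientset built as in the earlier watch sketch; the template name and container image are illustrative, and the test also exercises listing and patching, which are omitted here.

package podtemplates

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// podTemplateLifecycle creates, reads back, and deletes a PodTemplate.
func podTemplateLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
    pt := &corev1.PodTemplate{
        ObjectMeta: metav1.ObjectMeta{Name: "nginx-pod-template"}, // illustrative name
        Template: corev1.PodTemplateSpec{
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "nginx",
                    Image: "docker.io/library/nginx:1.21", // illustrative image
                }},
            },
        },
    }
    created, err := cs.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{})
    if err != nil {
        return err
    }
    if _, err := cs.CoreV1().PodTemplates(ns).Get(ctx, created.Name, metav1.GetOptions{}); err != nil {
        return err
    }
    return cs.CoreV1().PodTemplates(ns).Delete(ctx, created.Name, metav1.DeleteOptions{})
}
------------------------------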
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":309,"completed":245,"skipped":4347,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:09:48.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1569 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1569 STEP: creating replication controller externalsvc in namespace services-1569 I0113 08:09:48.298496 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1569, replica count: 2 I0113 08:09:51.349696 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 08:09:54.350352 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 13 08:09:54.416: INFO: Creating new exec pod Jan 13 08:09:58.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1569 exec execpod84b75 -- /bin/sh -x -c nslookup clusterip-service.services-1569.svc.cluster.local' Jan 13 08:10:00.062: INFO: stderr: "I0113 08:09:59.917359 3635 log.go:181] (0x4000232000) (0x4000b521e0) Create stream\nI0113 08:09:59.921086 3635 log.go:181] (0x4000232000) (0x4000b521e0) Stream added, broadcasting: 1\nI0113 08:09:59.929220 3635 log.go:181] (0x4000232000) Reply frame received for 1\nI0113 08:09:59.929703 3635 log.go:181] (0x4000232000) (0x400063a1e0) Create stream\nI0113 08:09:59.929755 3635 log.go:181] (0x4000232000) (0x400063a1e0) Stream added, broadcasting: 3\nI0113 08:09:59.931376 3635 log.go:181] (0x4000232000) Reply frame received for 3\nI0113 08:09:59.931909 3635 log.go:181] (0x4000232000) (0x4000b1dea0) Create stream\nI0113 08:09:59.932029 3635 log.go:181] (0x4000232000) (0x4000b1dea0) Stream added, broadcasting: 5\nI0113 08:09:59.933657 3635 log.go:181] (0x4000232000) Reply frame received for 5\nI0113 08:10:00.032685 3635 log.go:181] (0x4000232000) Data frame received for 5\nI0113 08:10:00.033067 3635 log.go:181] (0x4000b1dea0) (5) Data frame handling\nI0113 08:10:00.033899 3635 log.go:181] (0x4000b1dea0) (5) Data frame sent\n+ nslookup clusterip-service.services-1569.svc.cluster.local\nI0113 08:10:00.039948 3635 log.go:181] (0x4000232000) Data frame received for 3\nI0113 08:10:00.040162 
3635 log.go:181] (0x400063a1e0) (3) Data frame handling\nI0113 08:10:00.040331 3635 log.go:181] (0x400063a1e0) (3) Data frame sent\nI0113 08:10:00.040958 3635 log.go:181] (0x4000232000) Data frame received for 3\nI0113 08:10:00.041060 3635 log.go:181] (0x400063a1e0) (3) Data frame handling\nI0113 08:10:00.041179 3635 log.go:181] (0x400063a1e0) (3) Data frame sent\nI0113 08:10:00.041567 3635 log.go:181] (0x4000232000) Data frame received for 5\nI0113 08:10:00.041726 3635 log.go:181] (0x4000b1dea0) (5) Data frame handling\nI0113 08:10:00.042018 3635 log.go:181] (0x4000232000) Data frame received for 3\nI0113 08:10:00.042174 3635 log.go:181] (0x400063a1e0) (3) Data frame handling\nI0113 08:10:00.044106 3635 log.go:181] (0x4000232000) Data frame received for 1\nI0113 08:10:00.044232 3635 log.go:181] (0x4000b521e0) (1) Data frame handling\nI0113 08:10:00.044374 3635 log.go:181] (0x4000b521e0) (1) Data frame sent\nI0113 08:10:00.046503 3635 log.go:181] (0x4000232000) (0x4000b521e0) Stream removed, broadcasting: 1\nI0113 08:10:00.049303 3635 log.go:181] (0x4000232000) Go away received\nI0113 08:10:00.053375 3635 log.go:181] (0x4000232000) (0x4000b521e0) Stream removed, broadcasting: 1\nI0113 08:10:00.053730 3635 log.go:181] (0x4000232000) (0x400063a1e0) Stream removed, broadcasting: 3\nI0113 08:10:00.053968 3635 log.go:181] (0x4000232000) (0x4000b1dea0) Stream removed, broadcasting: 5\n" Jan 13 08:10:00.064: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1569.svc.cluster.local\tcanonical name = externalsvc.services-1569.svc.cluster.local.\nName:\texternalsvc.services-1569.svc.cluster.local\nAddress: 10.96.76.101\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1569, will wait for the garbage collector to delete the pods Jan 13 08:10:00.131: INFO: Deleting ReplicationController externalsvc took: 7.814939ms Jan 13 08:10:00.731: INFO: Terminating ReplicationController externalsvc pods took: 600.724558ms Jan 13 08:10:10.179: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:10.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1569" for this suite. 
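The nslookup output above captures the end state the Services test asserts: once the type changes, clusterip-service resolves as a CNAME to externalsvc.services-1569.svc.cluster.local rather than to its own ClusterIP. A rough manual equivalent of that transition, assuming a fresh namespace and a backing service already answering at the externalName target (the port, image, and merge-patch shape are assumptions, not the test's actual client-go calls):

kubectl create service clusterip clusterip-service --tcp=80:80
# switching to ExternalName requires clearing clusterIP; on newer API versions spec.clusterIPs may need clearing as well
kubectl patch service clusterip-service --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-1569.svc.cluster.local","clusterIP":""}}'
kubectl run execpod --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/execpod --timeout=60s
kubectl exec execpod -- nslookup clusterip-service

After the patch, the lookup should return the CNAME seen in the log instead of a cluster IP.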
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:22.203 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":309,"completed":246,"skipped":4367,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:10.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-7d30ebb3-79e9-4398-b368-e7becb14b7c6 STEP: Creating a pod to test consume secrets Jan 13 08:10:10.380: INFO: Waiting up to 5m0s for pod "pod-secrets-6f821076-b23f-4901-a848-031729d18bfc" in namespace "secrets-7114" to be "Succeeded or Failed" Jan 13 08:10:10.411: INFO: Pod "pod-secrets-6f821076-b23f-4901-a848-031729d18bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.11364ms Jan 13 08:10:12.418: INFO: Pod "pod-secrets-6f821076-b23f-4901-a848-031729d18bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038552321s Jan 13 08:10:14.425: INFO: Pod "pod-secrets-6f821076-b23f-4901-a848-031729d18bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045546472s STEP: Saw pod success Jan 13 08:10:14.426: INFO: Pod "pod-secrets-6f821076-b23f-4901-a848-031729d18bfc" satisfied condition "Succeeded or Failed" Jan 13 08:10:14.430: INFO: Trying to get logs from node leguer-worker pod pod-secrets-6f821076-b23f-4901-a848-031729d18bfc container secret-volume-test: STEP: delete the pod Jan 13 08:10:14.477: INFO: Waiting for pod pod-secrets-6f821076-b23f-4901-a848-031729d18bfc to disappear Jan 13 08:10:14.484: INFO: Pod pod-secrets-6f821076-b23f-4901-a848-031729d18bfc no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:14.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7114" for this suite. 
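The Secrets test above mounts one Secret into a single pod at two separate volume mounts and waits for the pod to reach Succeeded. A hand-written pod of the same shape, assuming busybox and made-up names for the secret, key, and mount paths (the container name secret-volume-test matches the log; everything else is illustrative):

kubectl create secret generic secret-test --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - {name: secret-volume-1, mountPath: /etc/secret-volume-1, readOnly: true}
    - {name: secret-volume-2, mountPath: /etc/secret-volume-2, readOnly: true}
  volumes:
  - name: secret-volume-1
    secret: {secretName: secret-test}
  - name: secret-volume-2
    secret: {secretName: secret-test}
EOF

If both mounts work, the container prints the secret value twice and the pod completes, mirroring the "Succeeded or Failed" wait in the log.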
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":309,"completed":247,"skipped":4387,"failed":0} ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:14.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:21.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5353" for this suite. • [SLOW TEST:7.220 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":309,"completed":248,"skipped":4387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:21.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 13 08:10:21.803: INFO: Waiting up to 5m0s for pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3" in namespace "emptydir-9965" to be "Succeeded or Failed" Jan 13 08:10:21.821: INFO: Pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.173174ms Jan 13 08:10:23.828: INFO: Pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024562434s Jan 13 08:10:25.835: INFO: Pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3": Phase="Running", Reason="", readiness=true. Elapsed: 4.031209968s Jan 13 08:10:27.892: INFO: Pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088030636s STEP: Saw pod success Jan 13 08:10:27.892: INFO: Pod "pod-e711d7b4-f11e-42a9-af07-8be422b5efb3" satisfied condition "Succeeded or Failed" Jan 13 08:10:27.896: INFO: Trying to get logs from node leguer-worker2 pod pod-e711d7b4-f11e-42a9-af07-8be422b5efb3 container test-container: STEP: delete the pod Jan 13 08:10:27.958: INFO: Waiting for pod pod-e711d7b4-f11e-42a9-af07-8be422b5efb3 to disappear Jan 13 08:10:28.006: INFO: Pod pod-e711d7b4-f11e-42a9-af07-8be422b5efb3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:28.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9965" for this suite. • [SLOW TEST:6.302 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":249,"skipped":4438,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:28.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:39.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7634" for this suite. • [SLOW TEST:11.189 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":309,"completed":250,"skipped":4438,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:39.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:10:39.423: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8abefd1a-aba1-49bf-9413-6fe7b523dde1", Controller:(*bool)(0x4004344bba), BlockOwnerDeletion:(*bool)(0x4004344bbb)}} Jan 13 08:10:39.491: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"8dba60d8-075b-4d49-920a-bb978a1867ed", Controller:(*bool)(0x4004344efa), BlockOwnerDeletion:(*bool)(0x4004344efb)}} Jan 13 08:10:39.545: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"76a0ded1-3802-42fb-9122-bb595e1a1f30", Controller:(*bool)(0x40043787ea), BlockOwnerDeletion:(*bool)(0x40043787eb)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:44.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5359" for this suite. 
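The three INFO lines above spell out the dependency circle the garbage-collector test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each reference carrying controller and blockOwnerDeletion flags. A sketch of wiring up one such link by hand, assuming three freshly created pods and sufficient permissions to patch ownerReferences (the helper function and image are hypothetical):

for p in pod1 pod2 pod3; do kubectl run "$p" --image=busybox:1.28 --restart=Never -- sleep 3600; done
# an ownerReference must carry the owner's real UID, so look it up first
uid() { kubectl get pod "$1" -o jsonpath='{.metadata.uid}'; }
kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$(uid pod3)\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
# repeat for pod2 -> pod1 and pod3 -> pod2 to close the circle

The point of the circle is what the test name states: garbage collection must not be blocked by it, so deleting the pods still proceeds rather than hanging on blockOwnerDeletion.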
• [SLOW TEST:5.407 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":309,"completed":251,"skipped":4440,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:44.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:44.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-705" for this suite. 
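The QOS test above only logs its steps, but the rule it exercises is that a pod whose containers all have resource requests exactly equal to their limits for cpu and memory is assigned the Guaranteed QOS class. A minimal example of that shape and of reading the class back, with an illustrative pod name and sizes:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: app
    image: busybox:1.28
    command: ["sleep", "3600"]
    resources:
      requests: {cpu: 100m, memory: 128Mi}
      limits: {cpu: 100m, memory: 128Mi}
EOF
kubectl get pod qos-guaranteed-example -o jsonpath='{.status.qosClass}'
# expected output: Guaranteed

Requests set lower than the limits would instead yield Burstable, and no requests or limits at all yields BestEffort, which is the class visible on the agnhost pod dumped later in this log (QOSClass:BestEffort).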
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":309,"completed":252,"skipped":4441,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:44.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jan 13 08:10:50.984: INFO: &Pod{ObjectMeta:{send-events-53eac0b6-5328-4642-83c3-dddb7012eb76 events-6972 35a2d3ee-2af0-4190-8072-9e8a40ab0886 512467 0 2021-01-13 08:10:44 +0000 UTC map[name:foo time:923882287] map[] [] [] [{e2e.test Update v1 2021-01-13 08:10:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-01-13 08:10:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2dbm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2dbm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2dbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 08:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 08:10:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 08:10:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-01-13 08:10:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.1.182,StartTime:2021-01-13 08:10:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-01-13 08:10:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://f14a21ec8db58cbbc0ff8c311f67654bdb74f8526735a9a62cc530495eeaae1d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 13 08:10:52.997: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 13 08:10:55.005: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:10:55.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6972" for this suite. • [SLOW TEST:10.299 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":309,"completed":253,"skipped":4448,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:10:55.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:11:01.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6725" for this suite. STEP: Destroying namespace "nsdeletetest-8214" for this suite. Jan 13 08:11:01.443: INFO: Namespace nsdeletetest-8214 was already deleted STEP: Destroying namespace "nsdeletetest-1589" for this suite. • [SLOW TEST:6.337 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":309,"completed":254,"skipped":4470,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:11:01.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jan 13 08:11:01.532: INFO: Waiting up to 1m0s for all nodes to be ready Jan 13 08:12:01.608: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:12:01.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
Jan 13 08:12:05.788: INFO: found a healthy node: leguer-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:12:26.310: INFO: pods created so far: [1 1 1] Jan 13 08:12:26.311: INFO: length of pods created so far: 3 Jan 13 08:12:56.325: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:03.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-3958" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:03.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2421" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:122.072 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":309,"completed":255,"skipped":4478,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:03.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 13 08:13:03.658: INFO: Waiting up to 5m0s for pod "pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278" in namespace "emptydir-3368" to be "Succeeded or Failed" Jan 13 08:13:03.681: INFO: Pod "pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278": Phase="Pending", Reason="", readiness=false. Elapsed: 22.784768ms Jan 13 08:13:05.690: INFO: Pod "pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032485734s Jan 13 08:13:09.005: INFO: Pod "pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.346781325s STEP: Saw pod success Jan 13 08:13:09.005: INFO: Pod "pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278" satisfied condition "Succeeded or Failed" Jan 13 08:13:09.012: INFO: Trying to get logs from node leguer-worker2 pod pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278 container test-container: STEP: delete the pod Jan 13 08:13:09.501: INFO: Waiting for pod pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278 to disappear Jan 13 08:13:09.536: INFO: Pod pod-6ac0090d-b7d1-4d99-8d69-f3f0a2fe9278 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3368" for this suite. • [SLOW TEST:6.130 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":256,"skipped":4485,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:09.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-map-e5bbb3ad-9490-4793-9bfd-bb850f0f813c STEP: Creating a pod to test consume configMaps Jan 13 08:13:09.805: INFO: Waiting up to 5m0s for pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c" in namespace "configmap-6682" to be "Succeeded or Failed" Jan 13 08:13:09.843: INFO: Pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.879702ms Jan 13 08:13:11.850: INFO: Pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044155981s Jan 13 08:13:13.865: INFO: Pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c": Phase="Running", Reason="", readiness=true. Elapsed: 4.059684038s Jan 13 08:13:15.872: INFO: Pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.06690403s STEP: Saw pod success Jan 13 08:13:15.873: INFO: Pod "pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c" satisfied condition "Succeeded or Failed" Jan 13 08:13:15.878: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c container agnhost-container: STEP: delete the pod Jan 13 08:13:15.925: INFO: Waiting for pod pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c to disappear Jan 13 08:13:15.949: INFO: Pod pod-configmaps-dd7e33e2-853d-46e8-a9a7-c0531a38a96c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:15.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6682" for this suite. • [SLOW TEST:6.312 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":257,"skipped":4496,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:15.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:16.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-474" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":309,"completed":258,"skipped":4517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:16.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating the pod Jan 13 08:13:20.840: INFO: Successfully updated pod "labelsupdate97df887f-d3a7-4748-9f09-505ffa38cf92" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-186" for this suite. • [SLOW TEST:8.725 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":309,"completed":259,"skipped":4563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:24.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 13 08:13:25.027: INFO: Waiting up to 5m0s for pod "pod-75ef8093-b775-4537-91a0-8367cf1f3724" in namespace "emptydir-837" to be "Succeeded or Failed" Jan 13 08:13:25.049: INFO: Pod "pod-75ef8093-b775-4537-91a0-8367cf1f3724": Phase="Pending", Reason="", readiness=false. Elapsed: 21.86205ms Jan 13 08:13:27.057: INFO: Pod "pod-75ef8093-b775-4537-91a0-8367cf1f3724": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02930453s Jan 13 08:13:29.065: INFO: Pod "pod-75ef8093-b775-4537-91a0-8367cf1f3724": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037180431s STEP: Saw pod success Jan 13 08:13:29.065: INFO: Pod "pod-75ef8093-b775-4537-91a0-8367cf1f3724" satisfied condition "Succeeded or Failed" Jan 13 08:13:29.070: INFO: Trying to get logs from node leguer-worker pod pod-75ef8093-b775-4537-91a0-8367cf1f3724 container test-container: STEP: delete the pod Jan 13 08:13:29.131: INFO: Waiting for pod pod-75ef8093-b775-4537-91a0-8367cf1f3724 to disappear Jan 13 08:13:29.169: INFO: Pod pod-75ef8093-b775-4537-91a0-8367cf1f3724 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:29.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-837" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":260,"skipped":4589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:29.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: validating cluster-info Jan 13 08:13:29.259: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-7523 cluster-info' Jan 13 08:13:30.569: INFO: stderr: "" Jan 13 08:13:30.570: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34747\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:30.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7523" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":309,"completed":261,"skipped":4616,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:30.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:13:30.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-9855 version' Jan 13 08:13:31.931: INFO: stderr: "" Jan 13 08:13:31.931: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.1\", GitCommit:\"c4d752765b3bbac2237bf87cf0b1c2e307844666\", GitTreeState:\"clean\", BuildDate:\"2020-12-18T12:09:25Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/arm64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.0\", GitCommit:\"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38\", GitTreeState:\"clean\", BuildDate:\"2020-12-08T22:31:47Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:31.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9855" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":309,"completed":262,"skipped":4630,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:31.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:13:40.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9825" for this suite. • [SLOW TEST:8.183 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":263,"skipped":4632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:13:40.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod with failed condition STEP: updating the pod Jan 13 08:15:40.824: INFO: Successfully updated pod "var-expansion-b38c0498-20f3-4d95-8d4a-133366dedc2b" STEP: waiting for pod running STEP: deleting the pod gracefully Jan 13 08:15:42.878: INFO: Deleting pod "var-expansion-b38c0498-20f3-4d95-8d4a-133366dedc2b" in namespace "var-expansion-5563" Jan 13 
08:15:42.885: INFO: Wait up to 5m0s for pod "var-expansion-b38c0498-20f3-4d95-8d4a-133366dedc2b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:16:40.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5563" for this suite. • [SLOW TEST:180.791 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":309,"completed":264,"skipped":4661,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:16:40.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 13 08:16:41.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6657 create -f -' Jan 13 08:16:47.349: INFO: stderr: "" Jan 13 08:16:47.349: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 13 08:16:48.358: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:48.359: INFO: Found 0 / 1 Jan 13 08:16:49.357: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:49.357: INFO: Found 0 / 1 Jan 13 08:16:50.367: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:50.367: INFO: Found 0 / 1 Jan 13 08:16:51.359: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:51.359: INFO: Found 1 / 1 Jan 13 08:16:51.359: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 13 08:16:51.365: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:51.365: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 13 08:16:51.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-6657 patch pod agnhost-primary-xnszk -p {"metadata":{"annotations":{"x":"y"}}}' Jan 13 08:16:52.686: INFO: stderr: "" Jan 13 08:16:52.686: INFO: stdout: "pod/agnhost-primary-xnszk patched\n" STEP: checking annotations Jan 13 08:16:52.692: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 08:16:52.692: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:16:52.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6657" for this suite. • [SLOW TEST:11.794 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":309,"completed":265,"skipped":4670,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:16:52.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-3fa0810d-20d1-4d46-89e7-12271672292a STEP: Creating a pod to test consume secrets Jan 13 08:16:52.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec" in namespace "projected-2913" to be "Succeeded or Failed" Jan 13 08:16:52.957: INFO: Pod "pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec": Phase="Pending", Reason="", readiness=false. Elapsed: 36.744149ms Jan 13 08:16:54.964: INFO: Pod "pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043355303s Jan 13 08:16:56.972: INFO: Pod "pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050825293s STEP: Saw pod success Jan 13 08:16:56.972: INFO: Pod "pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec" satisfied condition "Succeeded or Failed" Jan 13 08:16:56.977: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec container projected-secret-volume-test: STEP: delete the pod Jan 13 08:16:57.102: INFO: Waiting for pod pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec to disappear Jan 13 08:16:57.109: INFO: Pod pod-projected-secrets-e1a21844-c075-4381-955a-46f38d4b31ec no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:16:57.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2913" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":266,"skipped":4688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:16:57.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap configmap-4487/configmap-test-17731c71-1fae-4df6-8c06-645f6dd1bbcf STEP: Creating a pod to test consume configMaps Jan 13 08:16:57.233: INFO: Waiting up to 5m0s for pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f" in namespace "configmap-4487" to be "Succeeded or Failed" Jan 13 08:16:57.248: INFO: Pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464974ms Jan 13 08:16:59.358: INFO: Pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124261352s Jan 13 08:17:01.370: INFO: Pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136981624s Jan 13 08:17:03.379: INFO: Pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.145547871s STEP: Saw pod success Jan 13 08:17:03.379: INFO: Pod "pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f" satisfied condition "Succeeded or Failed" Jan 13 08:17:03.386: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f container env-test: STEP: delete the pod Jan 13 08:17:03.504: INFO: Waiting for pod pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f to disappear Jan 13 08:17:03.528: INFO: Pod pod-configmaps-d74a7090-8879-4f0b-861b-0f42c947f33f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:03.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4487" for this suite. • [SLOW TEST:6.416 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":309,"completed":267,"skipped":4711,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:03.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-cb732809-08eb-46f6-b288-fc18437f25e2 STEP: Creating a pod to test consume secrets Jan 13 08:17:03.705: INFO: Waiting up to 5m0s for pod "pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db" in namespace "secrets-2011" to be "Succeeded or Failed" Jan 13 08:17:03.726: INFO: Pod "pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db": Phase="Pending", Reason="", readiness=false. Elapsed: 20.60482ms Jan 13 08:17:05.734: INFO: Pod "pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028300987s Jan 13 08:17:07.744: INFO: Pod "pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038933744s STEP: Saw pod success Jan 13 08:17:07.744: INFO: Pod "pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db" satisfied condition "Succeeded or Failed" Jan 13 08:17:07.748: INFO: Trying to get logs from node leguer-worker pod pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db container secret-volume-test: STEP: delete the pod Jan 13 08:17:07.782: INFO: Waiting for pod pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db to disappear Jan 13 08:17:07.812: INFO: Pod pod-secrets-b3c0aaf3-d3ba-46d9-b27b-28ec5a2c58db no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:07.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2011" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":268,"skipped":4716,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:07.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all Jan 13 08:17:07.908: INFO: Waiting up to 5m0s for pod "client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9" in namespace "containers-7854" to be "Succeeded or Failed" Jan 13 08:17:07.944: INFO: Pod "client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.586538ms Jan 13 08:17:09.951: INFO: Pod "client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041926044s Jan 13 08:17:11.960: INFO: Pod "client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051812959s STEP: Saw pod success Jan 13 08:17:11.961: INFO: Pod "client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9" satisfied condition "Succeeded or Failed" Jan 13 08:17:11.965: INFO: Trying to get logs from node leguer-worker pod client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9 container agnhost-container: STEP: delete the pod Jan 13 08:17:11.987: INFO: Waiting for pod client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9 to disappear Jan 13 08:17:12.118: INFO: Pod client-containers-5a64f661-a871-42db-9776-6bfc2e239cb9 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:12.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7854" for this suite. 
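The Docker Containers test above sets both command and args on the container, overriding the image's ENTRYPOINT and CMD. A minimal sketch of such a pod follows; the pod name, image tag, and argument values are illustrative assumptions, and only the container name agnhost-container is taken from the log.

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo                          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21      # image tag is an assumption
    command: ["/agnhost"]                               # replaces the image ENTRYPOINT
    args: ["entrypoint-tester", "override", "arguments"] # replaces the image CMD; values are illustrative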
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":309,"completed":269,"skipped":4727,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:12.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:17:12.270: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:16.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5483" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":309,"completed":270,"skipped":4741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:16.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:20.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2416" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":271,"skipped":4771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:20.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:17:20.973: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8" in namespace "security-context-test-8108" to be "Succeeded or Failed" Jan 13 08:17:20.996: INFO: Pod "busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.36943ms Jan 13 08:17:23.003: INFO: Pod "busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029351792s Jan 13 08:17:25.011: INFO: Pod "busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037092494s Jan 13 08:17:25.011: INFO: Pod "busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:17:25.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8108" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":272,"skipped":4811,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:17:25.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Jan 13 08:17:26.285: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 13 08:17:26.370: INFO: Waiting for terminating namespaces to be deleted... Jan 13 08:17:26.402: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Jan 13 08:17:26.415: INFO: rally-a8f48c6d-3kmika18-pdtzv from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.415: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 08:17:26.415: INFO: rally-a8f48c6d-3kmika18-pllzg from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.415: INFO: Container rally-a8f48c6d-3kmika18 ready: true, restart count 0 Jan 13 08:17:26.415: INFO: rally-a8f48c6d-4cyi45kq-j5tzz from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.415: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 08:17:26.415: INFO: rally-a8f48c6d-f3hls6a3-57dwc from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.415: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 08:17:26.415: INFO: rally-a8f48c6d-1y3amfc0-lp8st from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 08:17:26.416: INFO: rally-a8f48c6d-9pqmjehi-9zwjj from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 08:17:26.416: INFO: chaos-controller-manager-69c479c674-s796v from default started at 2021-01-10 20:58:24 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container chaos-mesh ready: true, restart count 0 Jan 13 08:17:26.416: INFO: chaos-daemon-lv692 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 08:17:26.416: INFO: kindnet-psm25 from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 08:17:26.416: 
INFO: kube-proxy-bmbcs from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 08:17:26.416: INFO: pod-exec-websocket-ed791a57-84d9-42f6-b65a-94dfbe10ed32 from pods-5483 started at 2021-01-13 08:17:12 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container main ready: true, restart count 0 Jan 13 08:17:26.416: INFO: busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8 from security-context-test-8108 started at 2021-01-13 08:17:20 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.416: INFO: Container busybox-user-65534-7a442473-ea13-481f-a19d-4de6a73229b8 ready: false, restart count 0 Jan 13 08:17:26.416: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Jan 13 08:17:26.450: INFO: rally-a8f48c6d-4cyi45kq-knr4r from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.450: INFO: Container rally-a8f48c6d-4cyi45kq ready: true, restart count 0 Jan 13 08:17:26.450: INFO: rally-a8f48c6d-f3hls6a3-dwt8n from c-rally-a8f48c6d-bmjklszo started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.450: INFO: Container rally-a8f48c6d-f3hls6a3 ready: true, restart count 0 Jan 13 08:17:26.450: INFO: rally-a8f48c6d-1y3amfc0-hh9qk from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:32 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.450: INFO: Container rally-a8f48c6d-1y3amfc0 ready: true, restart count 0 Jan 13 08:17:26.451: INFO: rally-a8f48c6d-9pqmjehi-85slb from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container rally-a8f48c6d-9pqmjehi ready: true, restart count 0 Jan 13 08:17:26.451: INFO: rally-a8f48c6d-vnukxqu0-llj24 from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 08:17:26.451: INFO: rally-a8f48c6d-vnukxqu0-v85kr from c-rally-a8f48c6d-lypjtwol started at 2021-01-10 20:04:23 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container rally-a8f48c6d-vnukxqu0 ready: true, restart count 0 Jan 13 08:17:26.451: INFO: chaos-daemon-ffkg7 from default started at 2021-01-10 20:58:25 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container chaos-daemon ready: true, restart count 0 Jan 13 08:17:26.451: INFO: kindnet-8wggd from kube-system started at 2021-01-10 17:38:10 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container kindnet-cni ready: true, restart count 0 Jan 13 08:17:26.451: INFO: kube-proxy-29gxg from kube-system started at 2021-01-10 17:38:09 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container kube-proxy ready: true, restart count 0 Jan 13 08:17:26.451: INFO: agnhost-primary-xnszk from kubectl-6657 started at 2021-01-13 08:16:47 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container agnhost-primary ready: false, restart count 0 Jan 13 08:17:26.451: INFO: busybox-readonly-fsfd9d0661-087b-4fa8-a2e1-fa307fd3dc27 from kubelet-test-2416 started at 2021-01-13 08:17:16 +0000 UTC (1 container statuses recorded) Jan 13 08:17:26.451: INFO: Container busybox-readonly-fsfd9d0661-087b-4fa8-a2e1-fa307fd3dc27 ready: true, restart count 0 [It] validates that there is no conflict between 
pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e7ac5ca8-5922-48d5-a2c8-4ba44ce9f42c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.13 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.13 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 08:17:46.845: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:46.846: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:46.914989 10 log.go:181] (0x4003cf6420) (0x4007dd46e0) Create stream I0113 08:17:46.915186 10 log.go:181] (0x4003cf6420) (0x4007dd46e0) Stream added, broadcasting: 1 I0113 08:17:46.921814 10 log.go:181] (0x4003cf6420) Reply frame received for 1 I0113 08:17:46.922022 10 log.go:181] (0x4003cf6420) (0x40011e2280) Create stream I0113 08:17:46.922110 10 log.go:181] (0x4003cf6420) (0x40011e2280) Stream added, broadcasting: 3 I0113 08:17:46.923763 10 log.go:181] (0x4003cf6420) Reply frame received for 3 I0113 08:17:46.923969 10 log.go:181] (0x4003cf6420) (0x4007dd4780) Create stream I0113 08:17:46.924065 10 log.go:181] (0x4003cf6420) (0x4007dd4780) Stream added, broadcasting: 5 I0113 08:17:46.925461 10 log.go:181] (0x4003cf6420) Reply frame received for 5 I0113 08:17:47.030233 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.030477 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.030662 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.030809 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.030916 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.031046 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.031145 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.031235 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.031392 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.032518 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.032803 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.033166 10 log.go:181] (0x4003cf6420) Data frame received for 3 I0113 08:17:47.033342 10 log.go:181] (0x4003cf6420) Data frame received for 1 I0113 08:17:47.033486 10 log.go:181] (0x4007dd46e0) (1) Data frame handling I0113 08:17:47.033646 10 log.go:181] (0x40011e2280) (3) Data frame handling I0113 08:17:47.033939 10 log.go:181] (0x4007dd46e0) (1) Data frame sent I0113 08:17:47.034157 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.034411 10 log.go:181] (0x4003cf6420) (0x4007dd46e0) Stream removed, broadcasting: 1 I0113 08:17:47.034561 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 
08:17:47.034660 10 log.go:181] (0x40011e2280) (3) Data frame sent I0113 08:17:47.034791 10 log.go:181] (0x4003cf6420) Data frame received for 3 I0113 08:17:47.034891 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.035000 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.035084 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.035158 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.035265 10 log.go:181] (0x40011e2280) (3) Data frame handling I0113 08:17:47.035382 10 log.go:181] (0x4007dd4780) (5) Data frame sent I0113 08:17:47.035466 10 log.go:181] (0x4003cf6420) Data frame received for 5 I0113 08:17:47.035574 10 log.go:181] (0x4007dd4780) (5) Data frame handling I0113 08:17:47.035697 10 log.go:181] (0x4003cf6420) Go away received I0113 08:17:47.035861 10 log.go:181] (0x4003cf6420) (0x4007dd46e0) Stream removed, broadcasting: 1 I0113 08:17:47.036005 10 log.go:181] (0x4003cf6420) (0x40011e2280) Stream removed, broadcasting: 3 I0113 08:17:47.036144 10 log.go:181] (0x4003cf6420) (0x4007dd4780) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Jan 13 08:17:47.036: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:47.037: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:47.092607 10 log.go:181] (0x40045e0420) (0x400283a280) Create stream I0113 08:17:47.092804 10 log.go:181] (0x40045e0420) (0x400283a280) Stream added, broadcasting: 1 I0113 08:17:47.096592 10 log.go:181] (0x40045e0420) Reply frame received for 1 I0113 08:17:47.096790 10 log.go:181] (0x40045e0420) (0x4007dd48c0) Create stream I0113 08:17:47.097028 10 log.go:181] (0x40045e0420) (0x4007dd48c0) Stream added, broadcasting: 3 I0113 08:17:47.098777 10 log.go:181] (0x40045e0420) Reply frame received for 3 I0113 08:17:47.098904 10 log.go:181] (0x40045e0420) (0x400283a320) Create stream I0113 08:17:47.098972 10 log.go:181] (0x40045e0420) (0x400283a320) Stream added, broadcasting: 5 I0113 08:17:47.100528 10 log.go:181] (0x40045e0420) Reply frame received for 5 I0113 08:17:47.149689 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.149828 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.149913 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.150026 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.150095 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.150190 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.150265 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.150365 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.150547 10 log.go:181] (0x40045e0420) Data frame received for 3 I0113 08:17:47.150767 10 log.go:181] (0x4007dd48c0) (3) Data frame handling I0113 08:17:47.150909 10 log.go:181] (0x4007dd48c0) (3) Data frame sent I0113 08:17:47.151025 10 log.go:181] (0x40045e0420) Data frame received for 3 I0113 08:17:47.151177 10 log.go:181] (0x4007dd48c0) (3) Data frame handling I0113 08:17:47.151291 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.151364 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.151416 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.151482 
10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.151542 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.151594 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.151650 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.151705 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.151754 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.151811 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.151865 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.151917 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.151972 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.152025 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.152087 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.152152 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.152209 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.152259 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.152319 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.152385 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.152460 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.152547 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.152630 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.152695 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.152768 10 log.go:181] (0x400283a320) (5) Data frame sent I0113 08:17:47.153100 10 log.go:181] (0x40045e0420) Data frame received for 1 I0113 08:17:47.153348 10 log.go:181] (0x400283a280) (1) Data frame handling I0113 08:17:47.153524 10 log.go:181] (0x40045e0420) Data frame received for 5 I0113 08:17:47.153706 10 log.go:181] (0x400283a320) (5) Data frame handling I0113 08:17:47.153927 10 log.go:181] (0x400283a280) (1) Data frame sent I0113 08:17:47.154141 10 log.go:181] (0x40045e0420) (0x400283a280) Stream removed, broadcasting: 1 I0113 08:17:47.154353 10 log.go:181] (0x40045e0420) Go away received I0113 08:17:47.154786 10 log.go:181] (0x40045e0420) (0x400283a280) Stream removed, broadcasting: 1 I0113 08:17:47.154946 10 log.go:181] (0x40045e0420) (0x4007dd48c0) Stream removed, broadcasting: 3 I0113 08:17:47.155066 10 log.go:181] (0x40045e0420) (0x400283a320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Jan 13 08:17:47.155: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:47.155: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:47.223245 10 log.go:181] (0x4003cf6bb0) (0x4007dd4b40) Create stream I0113 08:17:47.223445 10 log.go:181] (0x4003cf6bb0) (0x4007dd4b40) Stream added, broadcasting: 1 I0113 08:17:47.228601 10 log.go:181] (0x4003cf6bb0) Reply frame received for 1 I0113 08:17:47.228825 10 log.go:181] (0x4003cf6bb0) (0x400283a3c0) Create stream I0113 08:17:47.229248 10 log.go:181] (0x4003cf6bb0) (0x400283a3c0) Stream added, broadcasting: 3 I0113 08:17:47.231203 10 log.go:181] (0x4003cf6bb0) Reply frame received for 3 I0113 08:17:47.231420 10 log.go:181] (0x4003cf6bb0) (0x4007dd4be0) Create stream I0113 08:17:47.231536 10 log.go:181] (0x4003cf6bb0) (0x4007dd4be0) Stream 
added, broadcasting: 5 I0113 08:17:47.233585 10 log.go:181] (0x4003cf6bb0) Reply frame received for 5 I0113 08:17:52.316280 10 log.go:181] (0x4003cf6bb0) Data frame received for 5 I0113 08:17:52.316562 10 log.go:181] (0x4007dd4be0) (5) Data frame handling I0113 08:17:52.316678 10 log.go:181] (0x4003cf6bb0) Data frame received for 3 I0113 08:17:52.316896 10 log.go:181] (0x400283a3c0) (3) Data frame handling I0113 08:17:52.317119 10 log.go:181] (0x4007dd4be0) (5) Data frame sent I0113 08:17:52.317328 10 log.go:181] (0x4003cf6bb0) Data frame received for 5 I0113 08:17:52.317467 10 log.go:181] (0x4007dd4be0) (5) Data frame handling I0113 08:17:52.319121 10 log.go:181] (0x4003cf6bb0) Data frame received for 1 I0113 08:17:52.319300 10 log.go:181] (0x4007dd4b40) (1) Data frame handling I0113 08:17:52.319459 10 log.go:181] (0x4007dd4b40) (1) Data frame sent I0113 08:17:52.319590 10 log.go:181] (0x4003cf6bb0) (0x4007dd4b40) Stream removed, broadcasting: 1 I0113 08:17:52.319794 10 log.go:181] (0x4003cf6bb0) Go away received I0113 08:17:52.320227 10 log.go:181] (0x4003cf6bb0) (0x4007dd4b40) Stream removed, broadcasting: 1 I0113 08:17:52.320420 10 log.go:181] (0x4003cf6bb0) (0x400283a3c0) Stream removed, broadcasting: 3 I0113 08:17:52.320569 10 log.go:181] (0x4003cf6bb0) (0x4007dd4be0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 08:17:52.321: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:52.321: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:52.402305 10 log.go:181] (0x40045e0bb0) (0x400283a6e0) Create stream I0113 08:17:52.402491 10 log.go:181] (0x40045e0bb0) (0x400283a6e0) Stream added, broadcasting: 1 I0113 08:17:52.405830 10 log.go:181] (0x40045e0bb0) Reply frame received for 1 I0113 08:17:52.405975 10 log.go:181] (0x40045e0bb0) (0x400072a5a0) Create stream I0113 08:17:52.406044 10 log.go:181] (0x40045e0bb0) (0x400072a5a0) Stream added, broadcasting: 3 I0113 08:17:52.407273 10 log.go:181] (0x40045e0bb0) Reply frame received for 3 I0113 08:17:52.407395 10 log.go:181] (0x40045e0bb0) (0x400072a640) Create stream I0113 08:17:52.407457 10 log.go:181] (0x40045e0bb0) (0x400072a640) Stream added, broadcasting: 5 I0113 08:17:52.408692 10 log.go:181] (0x40045e0bb0) Reply frame received for 5 I0113 08:17:52.463670 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.463866 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.464076 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.464191 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.464288 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.464443 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.464568 10 log.go:181] (0x40045e0bb0) Data frame received for 3 I0113 08:17:52.464699 10 log.go:181] (0x400072a5a0) (3) Data frame handling I0113 08:17:52.464817 10 log.go:181] (0x400072a5a0) (3) Data frame sent I0113 08:17:52.465116 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.465251 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.465367 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.465509 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 
08:17:52.465647 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.465766 10 log.go:181] (0x40045e0bb0) Data frame received for 3 I0113 08:17:52.465923 10 log.go:181] (0x400072a5a0) (3) Data frame handling I0113 08:17:52.466053 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.466198 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.466352 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.466488 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.466605 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.466729 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.466858 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.466967 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.467123 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.467307 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.467450 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.467604 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.467790 10 log.go:181] (0x40045e0bb0) Data frame received for 1 I0113 08:17:52.467966 10 log.go:181] (0x400283a6e0) (1) Data frame handling I0113 08:17:52.468143 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.468356 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.468493 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.468660 10 log.go:181] (0x400283a6e0) (1) Data frame sent I0113 08:17:52.468984 10 log.go:181] (0x40045e0bb0) (0x400283a6e0) Stream removed, broadcasting: 1 I0113 08:17:52.469227 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.469398 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.469541 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.469719 10 log.go:181] (0x400072a640) (5) Data frame sent I0113 08:17:52.469838 10 log.go:181] (0x40045e0bb0) Data frame received for 5 I0113 08:17:52.469924 10 log.go:181] (0x400072a640) (5) Data frame handling I0113 08:17:52.470017 10 log.go:181] (0x40045e0bb0) Go away received I0113 08:17:52.470126 10 log.go:181] (0x40045e0bb0) (0x400283a6e0) Stream removed, broadcasting: 1 I0113 08:17:52.470251 10 log.go:181] (0x40045e0bb0) (0x400072a5a0) Stream removed, broadcasting: 3 I0113 08:17:52.470381 10 log.go:181] (0x40045e0bb0) (0x400072a640) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Jan 13 08:17:52.470: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:52.470: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:52.536063 10 log.go:181] (0x40008af970) (0x400072ab40) Create stream I0113 08:17:52.536254 10 log.go:181] (0x40008af970) (0x400072ab40) Stream added, broadcasting: 1 I0113 08:17:52.540281 10 log.go:181] (0x40008af970) Reply frame received for 1 I0113 08:17:52.540415 10 log.go:181] (0x40008af970) (0x400072ad20) Create stream I0113 08:17:52.540479 10 log.go:181] (0x40008af970) (0x400072ad20) Stream added, broadcasting: 3 I0113 08:17:52.541858 10 log.go:181] (0x40008af970) Reply frame received for 3 I0113 08:17:52.542004 10 log.go:181] (0x40008af970) (0x40011e2320) Create stream I0113 08:17:52.542089 10 
log.go:181] (0x40008af970) (0x40011e2320) Stream added, broadcasting: 5 I0113 08:17:52.543544 10 log.go:181] (0x40008af970) Reply frame received for 5 I0113 08:17:52.626104 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.626437 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.626620 10 log.go:181] (0x40008af970) Data frame received for 3 I0113 08:17:52.626809 10 log.go:181] (0x400072ad20) (3) Data frame handling I0113 08:17:52.626984 10 log.go:181] (0x400072ad20) (3) Data frame sent I0113 08:17:52.627120 10 log.go:181] (0x40008af970) Data frame received for 3 I0113 08:17:52.627271 10 log.go:181] (0x400072ad20) (3) Data frame handling I0113 08:17:52.627444 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.627579 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.627698 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.627836 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.627984 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.628141 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.628308 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.628450 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.628576 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.628720 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.628950 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.629060 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.629138 10 log.go:181] (0x40008af970) Data frame received for 1 I0113 08:17:52.629240 10 log.go:181] (0x400072ab40) (1) Data frame handling I0113 08:17:52.629332 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.629450 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.629540 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.629657 10 log.go:181] (0x400072ab40) (1) Data frame sent I0113 08:17:52.629789 10 log.go:181] (0x40008af970) (0x400072ab40) Stream removed, broadcasting: 1 I0113 08:17:52.629949 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.630013 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.630065 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.630135 10 log.go:181] (0x40011e2320) (5) Data frame sent I0113 08:17:52.630192 10 log.go:181] (0x40008af970) Data frame received for 5 I0113 08:17:52.630256 10 log.go:181] (0x40011e2320) (5) Data frame handling I0113 08:17:52.630356 10 log.go:181] (0x40008af970) Go away received I0113 08:17:52.630451 10 log.go:181] (0x40008af970) (0x400072ab40) Stream removed, broadcasting: 1 I0113 08:17:52.630590 10 log.go:181] (0x40008af970) (0x400072ad20) Stream removed, broadcasting: 3 I0113 08:17:52.630712 10 log.go:181] (0x40008af970) (0x40011e2320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Jan 13 08:17:52.630: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:52.631: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:52.693224 10 log.go:181] (0x4000ca7130) (0x400072b2c0) Create stream I0113 08:17:52.693441 10 log.go:181] (0x4000ca7130) (0x400072b2c0) Stream added, broadcasting: 1 I0113 
08:17:52.697859 10 log.go:181] (0x4000ca7130) Reply frame received for 1 I0113 08:17:52.698187 10 log.go:181] (0x4000ca7130) (0x4007dd4d20) Create stream I0113 08:17:52.698368 10 log.go:181] (0x4000ca7130) (0x4007dd4d20) Stream added, broadcasting: 3 I0113 08:17:52.700029 10 log.go:181] (0x4000ca7130) Reply frame received for 3 I0113 08:17:52.700211 10 log.go:181] (0x4000ca7130) (0x40011e23c0) Create stream I0113 08:17:52.700342 10 log.go:181] (0x4000ca7130) (0x40011e23c0) Stream added, broadcasting: 5 I0113 08:17:52.702015 10 log.go:181] (0x4000ca7130) Reply frame received for 5 I0113 08:17:57.769704 10 log.go:181] (0x4000ca7130) Data frame received for 5 I0113 08:17:57.769955 10 log.go:181] (0x40011e23c0) (5) Data frame handling I0113 08:17:57.770088 10 log.go:181] (0x40011e23c0) (5) Data frame sent I0113 08:17:57.770175 10 log.go:181] (0x4000ca7130) Data frame received for 5 I0113 08:17:57.770299 10 log.go:181] (0x4000ca7130) Data frame received for 3 I0113 08:17:57.770505 10 log.go:181] (0x4007dd4d20) (3) Data frame handling I0113 08:17:57.770749 10 log.go:181] (0x40011e23c0) (5) Data frame handling I0113 08:17:57.771148 10 log.go:181] (0x4000ca7130) Data frame received for 1 I0113 08:17:57.771317 10 log.go:181] (0x400072b2c0) (1) Data frame handling I0113 08:17:57.771500 10 log.go:181] (0x400072b2c0) (1) Data frame sent I0113 08:17:57.771682 10 log.go:181] (0x4000ca7130) (0x400072b2c0) Stream removed, broadcasting: 1 I0113 08:17:57.771905 10 log.go:181] (0x4000ca7130) Go away received I0113 08:17:57.772201 10 log.go:181] (0x4000ca7130) (0x400072b2c0) Stream removed, broadcasting: 1 I0113 08:17:57.772353 10 log.go:181] (0x4000ca7130) (0x4007dd4d20) Stream removed, broadcasting: 3 I0113 08:17:57.772506 10 log.go:181] (0x4000ca7130) (0x40011e23c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 08:17:57.772: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:57.772: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:57.837412 10 log.go:181] (0x4000ca7970) (0x400072b540) Create stream I0113 08:17:57.837579 10 log.go:181] (0x4000ca7970) (0x400072b540) Stream added, broadcasting: 1 I0113 08:17:57.841790 10 log.go:181] (0x4000ca7970) Reply frame received for 1 I0113 08:17:57.842074 10 log.go:181] (0x4000ca7970) (0x4007dd4e60) Create stream I0113 08:17:57.842182 10 log.go:181] (0x4000ca7970) (0x4007dd4e60) Stream added, broadcasting: 3 I0113 08:17:57.843766 10 log.go:181] (0x4000ca7970) Reply frame received for 3 I0113 08:17:57.843907 10 log.go:181] (0x4000ca7970) (0x400072b5e0) Create stream I0113 08:17:57.843986 10 log.go:181] (0x4000ca7970) (0x400072b5e0) Stream added, broadcasting: 5 I0113 08:17:57.845278 10 log.go:181] (0x4000ca7970) Reply frame received for 5 I0113 08:17:57.905194 10 log.go:181] (0x4000ca7970) Data frame received for 5 I0113 08:17:57.905530 10 log.go:181] (0x400072b5e0) (5) Data frame handling I0113 08:17:57.905727 10 log.go:181] (0x400072b5e0) (5) Data frame sent I0113 08:17:57.905915 10 log.go:181] (0x4000ca7970) Data frame received for 5 I0113 08:17:57.906073 10 log.go:181] (0x400072b5e0) (5) Data frame handling I0113 08:17:57.906270 10 log.go:181] (0x400072b5e0) (5) Data frame sent I0113 08:17:57.909117 10 log.go:181] (0x4000ca7970) Data frame 
received for 5 I0113 08:17:57.909305 10 log.go:181] (0x400072b5e0) (5) Data frame handling I0113 08:17:57.909415 10 log.go:181] (0x4000ca7970) Data frame received for 3 I0113 08:17:57.909541 10 log.go:181] (0x4000ca7970) Data frame received for 1 I0113 08:17:57.909690 10 log.go:181] (0x400072b540) (1) Data frame handling I0113 08:17:57.909802 10 log.go:181] (0x4007dd4e60) (3) Data frame handling I0113 08:17:57.909953 10 log.go:181] (0x400072b5e0) (5) Data frame sent I0113 08:17:57.910081 10 log.go:181] (0x4000ca7970) Data frame received for 5 I0113 08:17:57.910155 10 log.go:181] (0x400072b5e0) (5) Data frame handling I0113 08:17:57.910313 10 log.go:181] (0x400072b540) (1) Data frame sent I0113 08:17:57.910472 10 log.go:181] (0x4000ca7970) (0x400072b540) Stream removed, broadcasting: 1 I0113 08:17:57.910655 10 log.go:181] (0x4007dd4e60) (3) Data frame sent I0113 08:17:57.910773 10 log.go:181] (0x4000ca7970) Data frame received for 3 I0113 08:17:57.910869 10 log.go:181] (0x4007dd4e60) (3) Data frame handling I0113 08:17:57.911005 10 log.go:181] (0x4000ca7970) Go away received I0113 08:17:57.911091 10 log.go:181] (0x4000ca7970) (0x400072b540) Stream removed, broadcasting: 1 I0113 08:17:57.911220 10 log.go:181] (0x4000ca7970) (0x4007dd4e60) Stream removed, broadcasting: 3 I0113 08:17:57.911317 10 log.go:181] (0x4000ca7970) (0x400072b5e0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Jan 13 08:17:57.911: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:57.911: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:57.966696 10 log.go:181] (0x4000ca7ce0) (0x400072ba40) Create stream I0113 08:17:57.966828 10 log.go:181] (0x4000ca7ce0) (0x400072ba40) Stream added, broadcasting: 1 I0113 08:17:57.970273 10 log.go:181] (0x4000ca7ce0) Reply frame received for 1 I0113 08:17:57.970415 10 log.go:181] (0x4000ca7ce0) (0x400072bb80) Create stream I0113 08:17:57.970479 10 log.go:181] (0x4000ca7ce0) (0x400072bb80) Stream added, broadcasting: 3 I0113 08:17:57.971663 10 log.go:181] (0x4000ca7ce0) Reply frame received for 3 I0113 08:17:57.971899 10 log.go:181] (0x4000ca7ce0) (0x4004334460) Create stream I0113 08:17:57.971995 10 log.go:181] (0x4000ca7ce0) (0x4004334460) Stream added, broadcasting: 5 I0113 08:17:57.973543 10 log.go:181] (0x4000ca7ce0) Reply frame received for 5 I0113 08:17:58.031718 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.031954 10 log.go:181] (0x4004334460) (5) Data frame handling I0113 08:17:58.032120 10 log.go:181] (0x4000ca7ce0) Data frame received for 3 I0113 08:17:58.032234 10 log.go:181] (0x400072bb80) (3) Data frame handling I0113 08:17:58.032414 10 log.go:181] (0x4004334460) (5) Data frame sent I0113 08:17:58.032691 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.032975 10 log.go:181] (0x4004334460) (5) Data frame handling I0113 08:17:58.033179 10 log.go:181] (0x400072bb80) (3) Data frame sent I0113 08:17:58.033330 10 log.go:181] (0x4000ca7ce0) Data frame received for 3 I0113 08:17:58.033458 10 log.go:181] (0x400072bb80) (3) Data frame handling I0113 08:17:58.033602 10 log.go:181] (0x4004334460) (5) Data frame sent I0113 08:17:58.033732 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.033835 10 log.go:181] 
(0x4004334460) (5) Data frame handling I0113 08:17:58.033984 10 log.go:181] (0x4004334460) (5) Data frame sent I0113 08:17:58.034102 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.034205 10 log.go:181] (0x4004334460) (5) Data frame handling I0113 08:17:58.034342 10 log.go:181] (0x4004334460) (5) Data frame sent I0113 08:17:58.034555 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.034678 10 log.go:181] (0x4004334460) (5) Data frame handling I0113 08:17:58.034856 10 log.go:181] (0x4004334460) (5) Data frame sent I0113 08:17:58.035106 10 log.go:181] (0x4000ca7ce0) Data frame received for 5 I0113 08:17:58.035264 10 log.go:181] (0x4004334460) (5) Data frame handling I0113 08:17:58.035410 10 log.go:181] (0x4000ca7ce0) Data frame received for 1 I0113 08:17:58.035556 10 log.go:181] (0x400072ba40) (1) Data frame handling I0113 08:17:58.035695 10 log.go:181] (0x400072ba40) (1) Data frame sent I0113 08:17:58.035845 10 log.go:181] (0x4000ca7ce0) (0x400072ba40) Stream removed, broadcasting: 1 I0113 08:17:58.036024 10 log.go:181] (0x4000ca7ce0) Go away received I0113 08:17:58.036426 10 log.go:181] (0x4000ca7ce0) (0x400072ba40) Stream removed, broadcasting: 1 I0113 08:17:58.036576 10 log.go:181] (0x4000ca7ce0) (0x400072bb80) Stream removed, broadcasting: 3 I0113 08:17:58.036747 10 log.go:181] (0x4000ca7ce0) (0x4004334460) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Jan 13 08:17:58.037: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:17:58.037: INFO: >>> kubeConfig: /root/.kube/config I0113 08:17:58.093283 10 log.go:181] (0x4003cf71e0) (0x4007dd5040) Create stream I0113 08:17:58.093417 10 log.go:181] (0x4003cf71e0) (0x4007dd5040) Stream added, broadcasting: 1 I0113 08:17:58.096920 10 log.go:181] (0x4003cf71e0) Reply frame received for 1 I0113 08:17:58.097142 10 log.go:181] (0x4003cf71e0) (0x4007dd50e0) Create stream I0113 08:17:58.097233 10 log.go:181] (0x4003cf71e0) (0x4007dd50e0) Stream added, broadcasting: 3 I0113 08:17:58.098688 10 log.go:181] (0x4003cf71e0) Reply frame received for 3 I0113 08:17:58.098802 10 log.go:181] (0x4003cf71e0) (0x40011e26e0) Create stream I0113 08:17:58.098859 10 log.go:181] (0x4003cf71e0) (0x40011e26e0) Stream added, broadcasting: 5 I0113 08:17:58.099944 10 log.go:181] (0x4003cf71e0) Reply frame received for 5 I0113 08:18:03.171817 10 log.go:181] (0x4003cf71e0) Data frame received for 5 I0113 08:18:03.171966 10 log.go:181] (0x40011e26e0) (5) Data frame handling I0113 08:18:03.172064 10 log.go:181] (0x40011e26e0) (5) Data frame sent I0113 08:18:03.172157 10 log.go:181] (0x4003cf71e0) Data frame received for 5 I0113 08:18:03.172241 10 log.go:181] (0x40011e26e0) (5) Data frame handling I0113 08:18:03.172330 10 log.go:181] (0x4003cf71e0) Data frame received for 3 I0113 08:18:03.172407 10 log.go:181] (0x4007dd50e0) (3) Data frame handling I0113 08:18:03.173795 10 log.go:181] (0x4003cf71e0) Data frame received for 1 I0113 08:18:03.173939 10 log.go:181] (0x4007dd5040) (1) Data frame handling I0113 08:18:03.174080 10 log.go:181] (0x4007dd5040) (1) Data frame sent I0113 08:18:03.174187 10 log.go:181] (0x4003cf71e0) (0x4007dd5040) Stream removed, broadcasting: 1 I0113 08:18:03.174321 10 log.go:181] (0x4003cf71e0) Go away received I0113 08:18:03.174666 10 log.go:181] 
(0x4003cf71e0) (0x4007dd5040) Stream removed, broadcasting: 1 I0113 08:18:03.174836 10 log.go:181] (0x4003cf71e0) (0x4007dd50e0) Stream removed, broadcasting: 3 I0113 08:18:03.174970 10 log.go:181] (0x4003cf71e0) (0x40011e26e0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 08:18:03.175: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:03.175: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:03.235032 10 log.go:181] (0x40045e13f0) (0x400283a960) Create stream I0113 08:18:03.235184 10 log.go:181] (0x40045e13f0) (0x400283a960) Stream added, broadcasting: 1 I0113 08:18:03.239113 10 log.go:181] (0x40045e13f0) Reply frame received for 1 I0113 08:18:03.239336 10 log.go:181] (0x40045e13f0) (0x400072bcc0) Create stream I0113 08:18:03.239474 10 log.go:181] (0x40045e13f0) (0x400072bcc0) Stream added, broadcasting: 3 I0113 08:18:03.241508 10 log.go:181] (0x40045e13f0) Reply frame received for 3 I0113 08:18:03.241740 10 log.go:181] (0x40045e13f0) (0x400072bd60) Create stream I0113 08:18:03.241853 10 log.go:181] (0x40045e13f0) (0x400072bd60) Stream added, broadcasting: 5 I0113 08:18:03.243372 10 log.go:181] (0x40045e13f0) Reply frame received for 5 I0113 08:18:03.339612 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.339808 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.339960 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.340132 10 log.go:181] (0x40045e13f0) Data frame received for 3 I0113 08:18:03.340323 10 log.go:181] (0x400072bcc0) (3) Data frame handling I0113 08:18:03.340536 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.340763 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.341122 10 log.go:181] (0x400072bcc0) (3) Data frame sent I0113 08:18:03.341326 10 log.go:181] (0x40045e13f0) Data frame received for 3 I0113 08:18:03.341442 10 log.go:181] (0x400072bcc0) (3) Data frame handling I0113 08:18:03.341582 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.341692 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.341771 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.341885 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.342008 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.342092 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.342208 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.342331 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.342429 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.342549 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.342696 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.342782 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.342872 10 log.go:181] (0x400072bd60) (5) Data frame sent I0113 08:18:03.342959 10 log.go:181] (0x40045e13f0) Data frame received for 5 I0113 08:18:03.343039 10 log.go:181] (0x400072bd60) (5) Data frame handling I0113 08:18:03.343148 10 log.go:181] (0x40045e13f0) Data frame received for 1 I0113 08:18:03.343217 10 log.go:181] (0x400283a960) (1) Data frame handling I0113 08:18:03.343286 
10 log.go:181] (0x400283a960) (1) Data frame sent I0113 08:18:03.343373 10 log.go:181] (0x40045e13f0) (0x400283a960) Stream removed, broadcasting: 1 I0113 08:18:03.343458 10 log.go:181] (0x40045e13f0) Go away received I0113 08:18:03.343947 10 log.go:181] (0x40045e13f0) (0x400283a960) Stream removed, broadcasting: 1 I0113 08:18:03.344132 10 log.go:181] (0x40045e13f0) (0x400072bcc0) Stream removed, broadcasting: 3 I0113 08:18:03.344287 10 log.go:181] (0x40045e13f0) (0x400072bd60) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Jan 13 08:18:03.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:03.344: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:03.407278 10 log.go:181] (0x40045e16b0) (0x400283ab40) Create stream I0113 08:18:03.407488 10 log.go:181] (0x40045e16b0) (0x400283ab40) Stream added, broadcasting: 1 I0113 08:18:03.412505 10 log.go:181] (0x40045e16b0) Reply frame received for 1 I0113 08:18:03.412771 10 log.go:181] (0x40045e16b0) (0x400072bea0) Create stream I0113 08:18:03.413035 10 log.go:181] (0x40045e16b0) (0x400072bea0) Stream added, broadcasting: 3 I0113 08:18:03.414595 10 log.go:181] (0x40045e16b0) Reply frame received for 3 I0113 08:18:03.414781 10 log.go:181] (0x40045e16b0) (0x4007dd5180) Create stream I0113 08:18:03.414890 10 log.go:181] (0x40045e16b0) (0x4007dd5180) Stream added, broadcasting: 5 I0113 08:18:03.416617 10 log.go:181] (0x40045e16b0) Reply frame received for 5 I0113 08:18:03.493336 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.493547 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.493705 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.493848 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.493939 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.494039 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.494127 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.494225 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.494327 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.494416 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.494497 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.494622 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.494820 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.494952 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.495099 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.495233 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.495459 10 log.go:181] (0x40045e16b0) Data frame received for 3 I0113 08:18:03.495645 10 log.go:181] (0x400072bea0) (3) Data frame handling I0113 08:18:03.495763 10 log.go:181] (0x400072bea0) (3) Data frame sent I0113 08:18:03.495848 10 log.go:181] (0x40045e16b0) Data frame received for 3 I0113 08:18:03.495915 10 log.go:181] (0x400072bea0) (3) Data frame handling I0113 08:18:03.496013 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.496138 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.496219 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 
08:18:03.496332 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.496479 10 log.go:181] (0x4007dd5180) (5) Data frame sent I0113 08:18:03.496601 10 log.go:181] (0x40045e16b0) Data frame received for 5 I0113 08:18:03.496709 10 log.go:181] (0x4007dd5180) (5) Data frame handling I0113 08:18:03.497098 10 log.go:181] (0x40045e16b0) Data frame received for 1 I0113 08:18:03.497195 10 log.go:181] (0x400283ab40) (1) Data frame handling I0113 08:18:03.497335 10 log.go:181] (0x400283ab40) (1) Data frame sent I0113 08:18:03.497480 10 log.go:181] (0x40045e16b0) (0x400283ab40) Stream removed, broadcasting: 1 I0113 08:18:03.497597 10 log.go:181] (0x40045e16b0) Go away received I0113 08:18:03.497914 10 log.go:181] (0x40045e16b0) (0x400283ab40) Stream removed, broadcasting: 1 I0113 08:18:03.498017 10 log.go:181] (0x40045e16b0) (0x400072bea0) Stream removed, broadcasting: 3 I0113 08:18:03.498098 10 log.go:181] (0x40045e16b0) (0x4007dd5180) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Jan 13 08:18:03.498: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:03.498: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:03.551785 10 log.go:181] (0x400583e000) (0x40011e2aa0) Create stream I0113 08:18:03.551917 10 log.go:181] (0x400583e000) (0x40011e2aa0) Stream added, broadcasting: 1 I0113 08:18:03.555859 10 log.go:181] (0x400583e000) Reply frame received for 1 I0113 08:18:03.556074 10 log.go:181] (0x400583e000) (0x400283ac80) Create stream I0113 08:18:03.556158 10 log.go:181] (0x400583e000) (0x400283ac80) Stream added, broadcasting: 3 I0113 08:18:03.557846 10 log.go:181] (0x400583e000) Reply frame received for 3 I0113 08:18:03.558008 10 log.go:181] (0x400583e000) (0x4007dd5220) Create stream I0113 08:18:03.558100 10 log.go:181] (0x400583e000) (0x4007dd5220) Stream added, broadcasting: 5 I0113 08:18:03.559480 10 log.go:181] (0x400583e000) Reply frame received for 5 I0113 08:18:08.616791 10 log.go:181] (0x400583e000) Data frame received for 5 I0113 08:18:08.617118 10 log.go:181] (0x4007dd5220) (5) Data frame handling I0113 08:18:08.617280 10 log.go:181] (0x400583e000) Data frame received for 3 I0113 08:18:08.617482 10 log.go:181] (0x400283ac80) (3) Data frame handling I0113 08:18:08.617623 10 log.go:181] (0x4007dd5220) (5) Data frame sent I0113 08:18:08.617748 10 log.go:181] (0x400583e000) Data frame received for 5 I0113 08:18:08.617886 10 log.go:181] (0x4007dd5220) (5) Data frame handling I0113 08:18:08.618948 10 log.go:181] (0x400583e000) Data frame received for 1 I0113 08:18:08.619111 10 log.go:181] (0x40011e2aa0) (1) Data frame handling I0113 08:18:08.619274 10 log.go:181] (0x40011e2aa0) (1) Data frame sent I0113 08:18:08.619464 10 log.go:181] (0x400583e000) (0x40011e2aa0) Stream removed, broadcasting: 1 I0113 08:18:08.619690 10 log.go:181] (0x400583e000) Go away received I0113 08:18:08.620190 10 log.go:181] (0x400583e000) (0x40011e2aa0) Stream removed, broadcasting: 1 I0113 08:18:08.620363 10 log.go:181] (0x400583e000) (0x400283ac80) Stream removed, broadcasting: 3 I0113 08:18:08.620481 10 log.go:181] (0x400583e000) (0x4007dd5220) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Jan 13 08:18:08.620: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g 
--connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:08.620: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:08.679720 10 log.go:181] (0x40045e1970) (0x400283ae60) Create stream I0113 08:18:08.679856 10 log.go:181] (0x40045e1970) (0x400283ae60) Stream added, broadcasting: 1 I0113 08:18:08.683565 10 log.go:181] (0x40045e1970) Reply frame received for 1 I0113 08:18:08.683714 10 log.go:181] (0x40045e1970) (0x4007dd52c0) Create stream I0113 08:18:08.683785 10 log.go:181] (0x40045e1970) (0x4007dd52c0) Stream added, broadcasting: 3 I0113 08:18:08.684966 10 log.go:181] (0x40045e1970) Reply frame received for 3 I0113 08:18:08.685111 10 log.go:181] (0x40045e1970) (0x4007dd5360) Create stream I0113 08:18:08.685196 10 log.go:181] (0x40045e1970) (0x4007dd5360) Stream added, broadcasting: 5 I0113 08:18:08.686354 10 log.go:181] (0x40045e1970) Reply frame received for 5 I0113 08:18:08.782541 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.782702 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.782831 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.782925 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.783002 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.783215 10 log.go:181] (0x40045e1970) Data frame received for 3 I0113 08:18:08.783419 10 log.go:181] (0x4007dd52c0) (3) Data frame handling I0113 08:18:08.783553 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.783657 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.783772 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.783887 10 log.go:181] (0x4007dd52c0) (3) Data frame sent I0113 08:18:08.784027 10 log.go:181] (0x40045e1970) Data frame received for 3 I0113 08:18:08.784130 10 log.go:181] (0x4007dd52c0) (3) Data frame handling I0113 08:18:08.784286 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.784443 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.784570 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.784723 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.784946 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.785072 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.785192 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.785300 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.785417 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.785704 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.785861 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.786064 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.786205 10 log.go:181] (0x40045e1970) Data frame received for 1 I0113 08:18:08.786360 10 log.go:181] (0x400283ae60) (1) Data frame handling I0113 08:18:08.786489 10 log.go:181] (0x4007dd5360) (5) Data frame sent I0113 08:18:08.786623 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.786865 10 log.go:181] (0x400283ae60) (1) Data frame sent I0113 08:18:08.787100 10 log.go:181] (0x40045e1970) (0x400283ae60) Stream removed, broadcasting: 1 I0113 08:18:08.787247 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.787388 10 log.go:181] 
(0x4007dd5360) (5) Data frame sent I0113 08:18:08.787551 10 log.go:181] (0x40045e1970) Data frame received for 5 I0113 08:18:08.787691 10 log.go:181] (0x4007dd5360) (5) Data frame handling I0113 08:18:08.787880 10 log.go:181] (0x40045e1970) Go away received I0113 08:18:08.788226 10 log.go:181] (0x40045e1970) (0x400283ae60) Stream removed, broadcasting: 1 I0113 08:18:08.788436 10 log.go:181] (0x40045e1970) (0x4007dd52c0) Stream removed, broadcasting: 3 I0113 08:18:08.788634 10 log.go:181] (0x40045e1970) (0x4007dd5360) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 Jan 13 08:18:08.788: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:08.789: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:08.849371 10 log.go:181] (0x4001a28580) (0x40004219a0) Create stream I0113 08:18:08.849545 10 log.go:181] (0x4001a28580) (0x40004219a0) Stream added, broadcasting: 1 I0113 08:18:08.853457 10 log.go:181] (0x4001a28580) Reply frame received for 1 I0113 08:18:08.853648 10 log.go:181] (0x4001a28580) (0x40043a4000) Create stream I0113 08:18:08.853718 10 log.go:181] (0x4001a28580) (0x40043a4000) Stream added, broadcasting: 3 I0113 08:18:08.854953 10 log.go:181] (0x4001a28580) Reply frame received for 3 I0113 08:18:08.855089 10 log.go:181] (0x4001a28580) (0x40043a40a0) Create stream I0113 08:18:08.855152 10 log.go:181] (0x4001a28580) (0x40043a40a0) Stream added, broadcasting: 5 I0113 08:18:08.856264 10 log.go:181] (0x4001a28580) Reply frame received for 5 I0113 08:18:08.917311 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.917462 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.917607 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.917765 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.917943 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.918125 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.918272 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.918424 10 log.go:181] (0x4001a28580) Data frame received for 3 I0113 08:18:08.918629 10 log.go:181] (0x40043a4000) (3) Data frame handling I0113 08:18:08.918724 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.918863 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.918972 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.919076 10 log.go:181] (0x40043a4000) (3) Data frame sent I0113 08:18:08.919218 10 log.go:181] (0x4001a28580) Data frame received for 3 I0113 08:18:08.919321 10 log.go:181] (0x40043a4000) (3) Data frame handling I0113 08:18:08.919498 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.919699 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.919853 10 log.go:181] (0x4001a28580) Data frame received for 1 I0113 08:18:08.920030 10 log.go:181] (0x40004219a0) (1) Data frame handling I0113 08:18:08.920149 10 log.go:181] (0x40004219a0) (1) Data frame sent I0113 08:18:08.920258 10 log.go:181] (0x4001a28580) (0x40004219a0) Stream removed, broadcasting: 1 I0113 08:18:08.920368 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.920477 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.920612 10 
log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.920752 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.920955 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.921099 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.921241 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.921339 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.921469 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.921610 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.921742 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.921880 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.922019 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.922115 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.922242 10 log.go:181] (0x40043a40a0) (5) Data frame sent I0113 08:18:08.922380 10 log.go:181] (0x4001a28580) Data frame received for 5 I0113 08:18:08.922485 10 log.go:181] (0x40043a40a0) (5) Data frame handling I0113 08:18:08.922659 10 log.go:181] (0x4001a28580) Go away received I0113 08:18:08.922803 10 log.go:181] (0x4001a28580) (0x40004219a0) Stream removed, broadcasting: 1 I0113 08:18:08.922947 10 log.go:181] (0x4001a28580) (0x40043a4000) Stream removed, broadcasting: 3 I0113 08:18:08.923077 10 log.go:181] (0x4001a28580) (0x40043a40a0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.13, port: 54321 UDP Jan 13 08:18:08.923: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.13 54321] Namespace:sched-pred-428 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:18:08.923: INFO: >>> kubeConfig: /root/.kube/config I0113 08:18:08.979697 10 log.go:181] (0x4000818fd0) (0x4002c803c0) Create stream I0113 08:18:08.979825 10 log.go:181] (0x4000818fd0) (0x4002c803c0) Stream added, broadcasting: 1 I0113 08:18:08.983711 10 log.go:181] (0x4000818fd0) Reply frame received for 1 I0113 08:18:08.983972 10 log.go:181] (0x4000818fd0) (0x4007dd5400) Create stream I0113 08:18:08.984093 10 log.go:181] (0x4000818fd0) (0x4007dd5400) Stream added, broadcasting: 3 I0113 08:18:08.986018 10 log.go:181] (0x4000818fd0) Reply frame received for 3 I0113 08:18:08.986166 10 log.go:181] (0x4000818fd0) (0x4002c80460) Create stream I0113 08:18:08.986236 10 log.go:181] (0x4000818fd0) (0x4002c80460) Stream added, broadcasting: 5 I0113 08:18:08.987757 10 log.go:181] (0x4000818fd0) Reply frame received for 5 I0113 08:18:14.050902 10 log.go:181] (0x4000818fd0) Data frame received for 5 I0113 08:18:14.051055 10 log.go:181] (0x4002c80460) (5) Data frame handling I0113 08:18:14.051150 10 log.go:181] (0x4002c80460) (5) Data frame sent I0113 08:18:14.051235 10 log.go:181] (0x4000818fd0) Data frame received for 5 I0113 08:18:14.051300 10 log.go:181] (0x4002c80460) (5) Data frame handling I0113 08:18:14.051418 10 log.go:181] (0x4000818fd0) Data frame received for 3 I0113 08:18:14.051561 10 log.go:181] (0x4007dd5400) (3) Data frame handling I0113 08:18:14.052962 10 log.go:181] (0x4000818fd0) Data frame received for 1 I0113 08:18:14.053089 10 log.go:181] (0x4002c803c0) (1) Data frame handling I0113 08:18:14.053217 10 log.go:181] (0x4002c803c0) (1) Data frame sent I0113 08:18:14.053342 10 log.go:181] (0x4000818fd0) (0x4002c803c0) Stream removed, broadcasting: 1 I0113 08:18:14.053722 10 log.go:181] 
(0x4000818fd0) Go away received I0113 08:18:14.053931 10 log.go:181] (0x4000818fd0) (0x4002c803c0) Stream removed, broadcasting: 1 I0113 08:18:14.054117 10 log.go:181] (0x4000818fd0) (0x4007dd5400) Stream removed, broadcasting: 3 I0113 08:18:14.054248 10 log.go:181] (0x4000818fd0) (0x4002c80460) Stream removed, broadcasting: 5 STEP: removing the label kubernetes.io/e2e-e7ac5ca8-5922-48d5-a2c8-4ba44ce9f42c off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e7ac5ca8-5922-48d5-a2c8-4ba44ce9f42c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:18:14.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-428" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:49.114 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":309,"completed":273,"skipped":4819,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:18:14.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap that has name configmap-test-emptyKey-d3415e69-1fa8-477f-ba7c-777228f9e91a [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:18:14.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5279" for this suite. 
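For reference, the host-port connectivity checks in the sched-pred-428 block above come down to three probes run from the e2e-host-exec pod, cycled once per hostPort/hostIP/protocol combination. A stand-alone sketch of the same probes, reusing the node IP 172.18.0.13 and port 54321 shown in the log, is:
  # TCP: the hostPort bound to the node address must answer an HTTP request
  curl -g --connect-timeout 5 http://172.18.0.13:54321/hostname
  # UDP: probe the same port with netcat (succeeds unless the probe is rejected within the timeout)
  nc -vuz -w 5 172.18.0.13 54321
  # hostIP 127.0.0.1: target loopback but force the outgoing interface to the node address
  curl -g --connect-timeout 5 --interface 172.18.0.13 http://127.0.0.1:54321/hostname
All three probes succeeding is what lets the predicate test conclude that pods sharing hostPort 54321 with different hostIP/protocol values were all scheduled and do not conflict.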
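The configmap-5279 case just above only has to submit a ConfigMap whose data map contains an empty key and assert that the API server refuses it. A minimal reproduction outside the suite (hypothetical object name, any namespace) would be:
  # Expect the create to fail validation: ConfigMap data keys may not be empty
  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-emptykey
  data:
    "": "value"
  EOF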
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":309,"completed":274,"skipped":4824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:18:14.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1250 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a new StatefulSet Jan 13 08:18:14.459: INFO: Found 0 stateful pods, waiting for 3 Jan 13 08:18:24.472: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:18:24.472: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:18:24.472: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 13 08:18:34.470: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:18:34.470: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:18:34.470: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:18:34.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-1250 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:18:36.097: INFO: stderr: "I0113 08:18:35.947996 3737 log.go:181] (0x400003a0b0) (0x400029e0a0) Create stream\nI0113 08:18:35.950507 3737 log.go:181] (0x400003a0b0) (0x400029e0a0) Stream added, broadcasting: 1\nI0113 08:18:35.959046 3737 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0113 08:18:35.959581 3737 log.go:181] (0x400003a0b0) (0x4000550000) Create stream\nI0113 08:18:35.959641 3737 log.go:181] (0x400003a0b0) (0x4000550000) Stream added, broadcasting: 3\nI0113 08:18:35.960785 3737 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0113 08:18:35.961058 3737 log.go:181] (0x400003a0b0) (0x4000550820) Create stream\nI0113 08:18:35.961127 3737 log.go:181] (0x400003a0b0) (0x4000550820) Stream added, broadcasting: 5\nI0113 08:18:35.961937 3737 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0113 08:18:36.043580 3737 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:18:36.043884 3737 log.go:181] (0x4000550820) (5) Data frame handling\nI0113 08:18:36.044567 3737 log.go:181] 
(0x4000550820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:18:36.079437 3737 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:18:36.079638 3737 log.go:181] (0x4000550000) (3) Data frame handling\nI0113 08:18:36.079812 3737 log.go:181] (0x4000550000) (3) Data frame sent\nI0113 08:18:36.080043 3737 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:18:36.080193 3737 log.go:181] (0x4000550000) (3) Data frame handling\nI0113 08:18:36.080400 3737 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:18:36.080557 3737 log.go:181] (0x4000550820) (5) Data frame handling\nI0113 08:18:36.081541 3737 log.go:181] (0x400003a0b0) Data frame received for 1\nI0113 08:18:36.081645 3737 log.go:181] (0x400029e0a0) (1) Data frame handling\nI0113 08:18:36.081739 3737 log.go:181] (0x400029e0a0) (1) Data frame sent\nI0113 08:18:36.083368 3737 log.go:181] (0x400003a0b0) (0x400029e0a0) Stream removed, broadcasting: 1\nI0113 08:18:36.086432 3737 log.go:181] (0x400003a0b0) Go away received\nI0113 08:18:36.089807 3737 log.go:181] (0x400003a0b0) (0x400029e0a0) Stream removed, broadcasting: 1\nI0113 08:18:36.090077 3737 log.go:181] (0x400003a0b0) (0x4000550000) Stream removed, broadcasting: 3\nI0113 08:18:36.090286 3737 log.go:181] (0x400003a0b0) (0x4000550820) Stream removed, broadcasting: 5\n" Jan 13 08:18:36.099: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:18:36.099: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 13 08:18:46.173: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 13 08:18:56.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-1250 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:18:57.923: INFO: stderr: "I0113 08:18:57.802890 3757 log.go:181] (0x4000746b00) (0x4000c06460) Create stream\nI0113 08:18:57.809415 3757 log.go:181] (0x4000746b00) (0x4000c06460) Stream added, broadcasting: 1\nI0113 08:18:57.823372 3757 log.go:181] (0x4000746b00) Reply frame received for 1\nI0113 08:18:57.824693 3757 log.go:181] (0x4000746b00) (0x4000dbd4a0) Create stream\nI0113 08:18:57.824812 3757 log.go:181] (0x4000746b00) (0x4000dbd4a0) Stream added, broadcasting: 3\nI0113 08:18:57.826812 3757 log.go:181] (0x4000746b00) Reply frame received for 3\nI0113 08:18:57.827412 3757 log.go:181] (0x4000746b00) (0x4000aaa000) Create stream\nI0113 08:18:57.827523 3757 log.go:181] (0x4000746b00) (0x4000aaa000) Stream added, broadcasting: 5\nI0113 08:18:57.829198 3757 log.go:181] (0x4000746b00) Reply frame received for 5\nI0113 08:18:57.900912 3757 log.go:181] (0x4000746b00) Data frame received for 5\nI0113 08:18:57.901677 3757 log.go:181] (0x4000746b00) Data frame received for 3\nI0113 08:18:57.901930 3757 log.go:181] (0x4000dbd4a0) (3) Data frame handling\nI0113 08:18:57.902574 3757 log.go:181] (0x4000aaa000) (5) Data frame handling\nI0113 08:18:57.902874 3757 log.go:181] (0x4000746b00) Data frame received for 1\nI0113 08:18:57.903017 3757 log.go:181] (0x4000c06460) (1) Data frame handling\nI0113 08:18:57.903436 3757 log.go:181] (0x4000dbd4a0) (3) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0113 08:18:57.904356 3757 log.go:181] (0x4000746b00) Data frame received for 3\nI0113 08:18:57.904415 3757 log.go:181] (0x4000dbd4a0) (3) Data frame handling\nI0113 08:18:57.904992 3757 log.go:181] (0x4000aaa000) (5) Data frame sent\nI0113 08:18:57.905099 3757 log.go:181] (0x4000c06460) (1) Data frame sent\nI0113 08:18:57.905287 3757 log.go:181] (0x4000746b00) Data frame received for 5\nI0113 08:18:57.905365 3757 log.go:181] (0x4000aaa000) (5) Data frame handling\nI0113 08:18:57.906032 3757 log.go:181] (0x4000746b00) (0x4000c06460) Stream removed, broadcasting: 1\nI0113 08:18:57.910231 3757 log.go:181] (0x4000746b00) Go away received\nI0113 08:18:57.913414 3757 log.go:181] (0x4000746b00) (0x4000c06460) Stream removed, broadcasting: 1\nI0113 08:18:57.913897 3757 log.go:181] (0x4000746b00) (0x4000dbd4a0) Stream removed, broadcasting: 3\nI0113 08:18:57.914069 3757 log.go:181] (0x4000746b00) (0x4000aaa000) Stream removed, broadcasting: 5\n" Jan 13 08:18:57.924: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 08:18:57.925: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 08:19:07.968: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:19:07.969: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:07.969: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:07.969: INFO: Waiting for Pod statefulset-1250/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:17.985: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:19:17.985: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:17.985: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:17.985: INFO: Waiting for Pod statefulset-1250/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:27.985: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:19:27.985: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:27.985: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:27.985: INFO: Waiting for Pod statefulset-1250/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:37.986: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:19:37.986: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:37.986: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:37.986: INFO: Waiting for Pod statefulset-1250/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:47.987: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:19:47.988: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:47.988: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 08:19:57.985: INFO: Waiting for StatefulSet 
statefulset-1250/ss2 to complete update Jan 13 08:19:57.985: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 13 08:20:07.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-1250 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:20:09.560: INFO: stderr: "I0113 08:20:09.418976 3776 log.go:181] (0x40001ba630) (0x40001b2280) Create stream\nI0113 08:20:09.421287 3776 log.go:181] (0x40001ba630) (0x40001b2280) Stream added, broadcasting: 1\nI0113 08:20:09.432998 3776 log.go:181] (0x40001ba630) Reply frame received for 1\nI0113 08:20:09.433827 3776 log.go:181] (0x40001ba630) (0x400064e000) Create stream\nI0113 08:20:09.433913 3776 log.go:181] (0x40001ba630) (0x400064e000) Stream added, broadcasting: 3\nI0113 08:20:09.435528 3776 log.go:181] (0x40001ba630) Reply frame received for 3\nI0113 08:20:09.435843 3776 log.go:181] (0x40001ba630) (0x400056e8c0) Create stream\nI0113 08:20:09.435921 3776 log.go:181] (0x40001ba630) (0x400056e8c0) Stream added, broadcasting: 5\nI0113 08:20:09.437474 3776 log.go:181] (0x40001ba630) Reply frame received for 5\nI0113 08:20:09.513936 3776 log.go:181] (0x40001ba630) Data frame received for 5\nI0113 08:20:09.514317 3776 log.go:181] (0x400056e8c0) (5) Data frame handling\nI0113 08:20:09.515028 3776 log.go:181] (0x400056e8c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:20:09.540415 3776 log.go:181] (0x40001ba630) Data frame received for 3\nI0113 08:20:09.540590 3776 log.go:181] (0x400064e000) (3) Data frame handling\nI0113 08:20:09.540706 3776 log.go:181] (0x40001ba630) Data frame received for 5\nI0113 08:20:09.540921 3776 log.go:181] (0x400056e8c0) (5) Data frame handling\nI0113 08:20:09.541146 3776 log.go:181] (0x400064e000) (3) Data frame sent\nI0113 08:20:09.541221 3776 log.go:181] (0x40001ba630) Data frame received for 3\nI0113 08:20:09.541273 3776 log.go:181] (0x400064e000) (3) Data frame handling\nI0113 08:20:09.542078 3776 log.go:181] (0x40001ba630) Data frame received for 1\nI0113 08:20:09.542182 3776 log.go:181] (0x40001b2280) (1) Data frame handling\nI0113 08:20:09.542282 3776 log.go:181] (0x40001b2280) (1) Data frame sent\nI0113 08:20:09.544477 3776 log.go:181] (0x40001ba630) (0x40001b2280) Stream removed, broadcasting: 1\nI0113 08:20:09.546651 3776 log.go:181] (0x40001ba630) Go away received\nI0113 08:20:09.551241 3776 log.go:181] (0x40001ba630) (0x40001b2280) Stream removed, broadcasting: 1\nI0113 08:20:09.551652 3776 log.go:181] (0x40001ba630) (0x400064e000) Stream removed, broadcasting: 3\nI0113 08:20:09.551915 3776 log.go:181] (0x40001ba630) (0x400056e8c0) Stream removed, broadcasting: 5\n" Jan 13 08:20:09.560: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:20:09.561: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 08:20:19.613: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 13 08:20:29.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-1250 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:20:31.272: INFO: stderr: "I0113 08:20:31.155891 3796 log.go:181] (0x4000b54000) 
(0x4000c34000) Create stream\nI0113 08:20:31.161345 3796 log.go:181] (0x4000b54000) (0x4000c34000) Stream added, broadcasting: 1\nI0113 08:20:31.174456 3796 log.go:181] (0x4000b54000) Reply frame received for 1\nI0113 08:20:31.175323 3796 log.go:181] (0x4000b54000) (0x4000c340a0) Create stream\nI0113 08:20:31.175412 3796 log.go:181] (0x4000b54000) (0x4000c340a0) Stream added, broadcasting: 3\nI0113 08:20:31.176649 3796 log.go:181] (0x4000b54000) Reply frame received for 3\nI0113 08:20:31.177011 3796 log.go:181] (0x4000b54000) (0x4000397900) Create stream\nI0113 08:20:31.177089 3796 log.go:181] (0x4000b54000) (0x4000397900) Stream added, broadcasting: 5\nI0113 08:20:31.178328 3796 log.go:181] (0x4000b54000) Reply frame received for 5\nI0113 08:20:31.252458 3796 log.go:181] (0x4000b54000) Data frame received for 5\nI0113 08:20:31.252763 3796 log.go:181] (0x4000b54000) Data frame received for 3\nI0113 08:20:31.253096 3796 log.go:181] (0x4000b54000) Data frame received for 1\nI0113 08:20:31.253414 3796 log.go:181] (0x4000c340a0) (3) Data frame handling\nI0113 08:20:31.253530 3796 log.go:181] (0x4000c34000) (1) Data frame handling\nI0113 08:20:31.254093 3796 log.go:181] (0x4000397900) (5) Data frame handling\nI0113 08:20:31.255727 3796 log.go:181] (0x4000c34000) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 08:20:31.256183 3796 log.go:181] (0x4000397900) (5) Data frame sent\nI0113 08:20:31.256659 3796 log.go:181] (0x4000b54000) Data frame received for 5\nI0113 08:20:31.256930 3796 log.go:181] (0x4000397900) (5) Data frame handling\nI0113 08:20:31.257257 3796 log.go:181] (0x4000c340a0) (3) Data frame sent\nI0113 08:20:31.257331 3796 log.go:181] (0x4000b54000) Data frame received for 3\nI0113 08:20:31.257826 3796 log.go:181] (0x4000b54000) (0x4000c34000) Stream removed, broadcasting: 1\nI0113 08:20:31.258626 3796 log.go:181] (0x4000c340a0) (3) Data frame handling\nI0113 08:20:31.260473 3796 log.go:181] (0x4000b54000) Go away received\nI0113 08:20:31.263877 3796 log.go:181] (0x4000b54000) (0x4000c34000) Stream removed, broadcasting: 1\nI0113 08:20:31.264178 3796 log.go:181] (0x4000b54000) (0x4000c340a0) Stream removed, broadcasting: 3\nI0113 08:20:31.264485 3796 log.go:181] (0x4000b54000) (0x4000397900) Stream removed, broadcasting: 5\n" Jan 13 08:20:31.272: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 08:20:31.272: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 08:20:41.312: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:20:41.312: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:20:41.312: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:20:41.313: INFO: Waiting for Pod statefulset-1250/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:20:51.327: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:20:51.327: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:20:51.327: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:01.329: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:01.329: INFO: Waiting for Pod statefulset-1250/ss2-0 to 
have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:01.329: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:11.330: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:11.330: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:11.330: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:21.332: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:21.332: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:21.332: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:31.330: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:31.331: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:31.331: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:41.329: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:41.330: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:41.330: INFO: Waiting for Pod statefulset-1250/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:21:51.326: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:21:51.326: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:22:01.329: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:22:01.329: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:22:11.328: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:22:11.329: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:22:21.339: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:22:21.340: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:22:31.327: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update Jan 13 08:22:31.327: INFO: Waiting for Pod statefulset-1250/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 13 08:22:41.327: INFO: Waiting for StatefulSet statefulset-1250/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 08:22:51.329: INFO: Deleting all statefulset in ns statefulset-1250 Jan 13 08:22:51.334: INFO: Scaling statefulset ss2 to 0 Jan 13 08:24:41.383: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 08:24:41.388: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:24:41.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1250" for this suite. 
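The statefulset-1250 sequence above follows the usual rolling-update/rollback flow: the template image is changed to create a new controller revision, pods are replaced from the highest ordinal down, and the previous revision is then restored; moving index.html out of and back into the htdocs directory toggles the pod's HTTP readiness check, which is how the test controls when each pod counts as ready. The suite drives this through the Go client, but a rough kubectl approximation of the same flow (the '*' updates every container in the template, since the container name is not shown in the log) is:
  # Start a rolling update by switching the template image, creating a new controller revision
  kubectl -n statefulset-1250 set image statefulset/ss2 '*=docker.io/library/httpd:2.4.39-alpine'
  # Pods are replaced in reverse ordinal order; wait for the rollout to settle
  kubectl -n statefulset-1250 rollout status statefulset/ss2
  # Restore the previous revision, mirroring the test's "Rolling back to a previous revision" phase
  kubectl -n statefulset-1250 rollout undo statefulset/ss2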
• [SLOW TEST:387.146 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":309,"completed":275,"skipped":4896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:24:41.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:24:41.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8834" for this suite. STEP: Destroying namespace "nspatchtest-64593814-05ea-4c98-a9c3-5bc3dc331c9b-9548" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":309,"completed":276,"skipped":4938,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:24:41.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir volume type on node default medium Jan 13 08:24:41.799: INFO: Waiting up to 5m0s for pod "pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68" in namespace "emptydir-2775" to be "Succeeded or Failed" Jan 13 08:24:41.837: INFO: Pod "pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.676539ms Jan 13 08:24:43.845: INFO: Pod "pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045480011s Jan 13 08:24:45.852: INFO: Pod "pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053202796s STEP: Saw pod success Jan 13 08:24:45.853: INFO: Pod "pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68" satisfied condition "Succeeded or Failed" Jan 13 08:24:45.857: INFO: Trying to get logs from node leguer-worker pod pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68 container test-container: STEP: delete the pod Jan 13 08:24:45.935: INFO: Waiting for pod pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68 to disappear Jan 13 08:24:45.956: INFO: Pod pod-4dd81225-1375-40f6-8ad5-dc12dbbfec68 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:24:45.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2775" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":277,"skipped":4947,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:24:45.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8670 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8670;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8670 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8670;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8670.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8670.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8670.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8670.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc;check="$$(dig +notcp +noall +answer 
+search _http._tcp.test-service-2.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8670.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8670.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8670.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.84.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.84.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.84.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.84.146_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8670 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8670;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8670 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8670;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8670.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8670.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8670.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8670.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8670.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8670.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8670.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8670.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8670.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.84.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.84.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.84.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.84.146_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 08:24:54.214: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.219: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.232: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.236: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.240: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.244: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.271: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.275: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.279: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.283: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.288: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.291: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.296: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.305: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:54.325: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:24:59.333: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.339: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.347: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.356: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.359: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.383: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.444: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.449: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.454: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.458: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.463: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.472: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:24:59.504: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:25:04.333: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.339: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.347: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod 
dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.352: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.362: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.366: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.395: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.403: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.412: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.422: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:04.453: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:25:09.334: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.341: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.346: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.350: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.354: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.362: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.366: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.418: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.423: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.427: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.435: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod 
dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.444: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.448: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:09.474: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:25:14.333: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.337: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.341: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.354: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.358: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.363: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod 
dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.389: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.394: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.398: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.402: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.406: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:14.442: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:25:19.334: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.339: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.344: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the 
server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.348: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.353: INFO: Unable to read wheezy_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.357: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.362: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.366: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.404: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.408: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.426: INFO: Unable to read jessie_udp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.431: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670 from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.436: INFO: Unable to read jessie_udp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.445: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.450: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc from pod dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a: the server could not find the requested resource (get pods dns-test-824a6b22-b371-48df-b762-4e2ac698a89a) Jan 13 08:25:19.478: INFO: Lookups using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8670 wheezy_tcp@dns-test-service.dns-8670 wheezy_udp@dns-test-service.dns-8670.svc wheezy_tcp@dns-test-service.dns-8670.svc wheezy_udp@_http._tcp.dns-test-service.dns-8670.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8670.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8670 jessie_tcp@dns-test-service.dns-8670 jessie_udp@dns-test-service.dns-8670.svc jessie_tcp@dns-test-service.dns-8670.svc jessie_udp@_http._tcp.dns-test-service.dns-8670.svc jessie_tcp@_http._tcp.dns-test-service.dns-8670.svc] Jan 13 08:25:24.470: INFO: DNS probes using dns-8670/dns-test-824a6b22-b371-48df-b762-4e2ac698a89a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:25:25.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8670" for this suite. • [SLOW TEST:39.257 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":309,"completed":278,"skipped":4984,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:25:25.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1832, will wait for the garbage collector to delete the pods Jan 13 08:25:31.388: INFO: Deleting Job.batch foo took: 7.362125ms Jan 13 08:25:31.489: INFO: Terminating Job.batch foo pods took: 100.905302ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:26:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1832" for this suite. 
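For reference, the Job steps logged above ("delete a job", then waiting for the garbage collector to remove its pods) can be reproduced by hand against the same namespace. A minimal kubectl sketch follows; the busybox image and the sleep command are illustrative assumptions, not what the e2e framework actually runs:
  # create a simple Job named "foo" in the test namespace (spec values are hypothetical)
  kubectl create job foo --image=busybox -n job-1832 -- sleep 3600
  # delete the Job; its pods are then cleaned up by the garbage collector, as in the log above
  kubectl delete job.batch foo -n job-1832
  # watch the dependent pods disappear once garbage collection finishes
  kubectl get pods -n job-1832 -l job-name=foo --watch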
• [SLOW TEST:84.984 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":309,"completed":279,"skipped":4993,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:26:50.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:27:18.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6030" for this suite. • [SLOW TEST:28.211 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":309,"completed":280,"skipped":4999,"failed":0} SSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:27:18.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-1339 Jan 13 08:27:22.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 13 08:27:27.337: INFO: stderr: "I0113 08:27:27.194670 3816 log.go:181] (0x400003a0b0) (0x4000b92000) Create stream\nI0113 08:27:27.200268 3816 log.go:181] (0x400003a0b0) (0x4000b92000) Stream added, broadcasting: 1\nI0113 08:27:27.209634 3816 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0113 08:27:27.210228 3816 log.go:181] (0x400003a0b0) (0x4000960500) Create stream\nI0113 08:27:27.210304 3816 log.go:181] (0x400003a0b0) (0x4000960500) Stream added, broadcasting: 3\nI0113 08:27:27.211439 3816 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0113 08:27:27.211732 3816 log.go:181] (0x400003a0b0) (0x4000b920a0) Create stream\nI0113 08:27:27.211792 3816 log.go:181] (0x400003a0b0) (0x4000b920a0) Stream added, broadcasting: 5\nI0113 08:27:27.212992 3816 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0113 08:27:27.312298 3816 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:27:27.312723 3816 log.go:181] (0x4000b920a0) (5) Data frame handling\nI0113 08:27:27.313418 3816 log.go:181] (0x4000b920a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0113 08:27:27.316602 3816 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:27:27.316786 3816 log.go:181] (0x4000960500) (3) Data frame handling\nI0113 08:27:27.317103 3816 log.go:181] (0x4000960500) (3) Data frame sent\nI0113 08:27:27.317285 3816 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:27:27.317527 3816 log.go:181] (0x4000960500) (3) Data frame handling\nI0113 08:27:27.317768 3816 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:27:27.317873 3816 log.go:181] (0x4000b920a0) (5) Data frame handling\nI0113 08:27:27.319368 3816 log.go:181] (0x400003a0b0) Data frame received for 1\nI0113 08:27:27.319533 3816 log.go:181] (0x4000b92000) (1) Data frame handling\nI0113 08:27:27.319721 3816 log.go:181] (0x4000b92000) (1) Data frame sent\nI0113 08:27:27.321260 3816 log.go:181] (0x400003a0b0) (0x4000b92000) Stream removed, broadcasting: 1\nI0113 08:27:27.323424 3816 log.go:181] (0x400003a0b0) Go away received\nI0113 08:27:27.327945 3816 log.go:181] (0x400003a0b0) 
(0x4000b92000) Stream removed, broadcasting: 1\nI0113 08:27:27.328586 3816 log.go:181] (0x400003a0b0) (0x4000960500) Stream removed, broadcasting: 3\nI0113 08:27:27.328809 3816 log.go:181] (0x400003a0b0) (0x4000b920a0) Stream removed, broadcasting: 5\n" Jan 13 08:27:27.337: INFO: stdout: "iptables" Jan 13 08:27:27.337: INFO: proxyMode: iptables Jan 13 08:27:27.385: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 13 08:27:27.390: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-1339 STEP: creating replication controller affinity-clusterip-timeout in namespace services-1339 I0113 08:27:27.745309 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-1339, replica count: 3 I0113 08:27:30.797072 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 08:27:33.797818 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 08:27:33.812: INFO: Creating new exec pod Jan 13 08:27:38.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Jan 13 08:27:40.446: INFO: stderr: "I0113 08:27:40.302693 3837 log.go:181] (0x40007f80b0) (0x4000d98000) Create stream\nI0113 08:27:40.306158 3837 log.go:181] (0x40007f80b0) (0x4000d98000) Stream added, broadcasting: 1\nI0113 08:27:40.317152 3837 log.go:181] (0x40007f80b0) Reply frame received for 1\nI0113 08:27:40.318063 3837 log.go:181] (0x40007f80b0) (0x4000d980a0) Create stream\nI0113 08:27:40.318148 3837 log.go:181] (0x40007f80b0) (0x4000d980a0) Stream added, broadcasting: 3\nI0113 08:27:40.319913 3837 log.go:181] (0x40007f80b0) Reply frame received for 3\nI0113 08:27:40.320253 3837 log.go:181] (0x40007f80b0) (0x4000386d20) Create stream\nI0113 08:27:40.320340 3837 log.go:181] (0x40007f80b0) (0x4000386d20) Stream added, broadcasting: 5\nI0113 08:27:40.321824 3837 log.go:181] (0x40007f80b0) Reply frame received for 5\nI0113 08:27:40.424685 3837 log.go:181] (0x40007f80b0) Data frame received for 5\nI0113 08:27:40.425361 3837 log.go:181] (0x40007f80b0) Data frame received for 3\nI0113 08:27:40.425559 3837 log.go:181] (0x4000386d20) (5) Data frame handling\nI0113 08:27:40.425682 3837 log.go:181] (0x40007f80b0) Data frame received for 1\nI0113 08:27:40.425786 3837 log.go:181] (0x4000d98000) (1) Data frame handling\nI0113 08:27:40.426140 3837 log.go:181] (0x4000d980a0) (3) Data frame handling\nI0113 08:27:40.428174 3837 log.go:181] (0x4000d98000) (1) Data frame sent\nI0113 08:27:40.428793 3837 log.go:181] (0x4000386d20) (5) Data frame sent\nI0113 08:27:40.429765 3837 log.go:181] (0x40007f80b0) Data frame received for 5\nI0113 08:27:40.429883 3837 log.go:181] (0x4000386d20) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0113 08:27:40.431191 3837 log.go:181] (0x40007f80b0) (0x4000d98000) Stream removed, broadcasting: 1\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0113 08:27:40.432801 3837 log.go:181] (0x4000386d20) (5) Data frame sent\nI0113 08:27:40.432930 3837 log.go:181] (0x40007f80b0) Data frame received for 5\nI0113 08:27:40.433010 3837 log.go:181] (0x4000386d20) (5) Data 
frame handling\nI0113 08:27:40.433276 3837 log.go:181] (0x40007f80b0) Go away received\nI0113 08:27:40.436361 3837 log.go:181] (0x40007f80b0) (0x4000d98000) Stream removed, broadcasting: 1\nI0113 08:27:40.436996 3837 log.go:181] (0x40007f80b0) (0x4000d980a0) Stream removed, broadcasting: 3\nI0113 08:27:40.437453 3837 log.go:181] (0x40007f80b0) (0x4000386d20) Stream removed, broadcasting: 5\n" Jan 13 08:27:40.447: INFO: stdout: "" Jan 13 08:27:40.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c nc -zv -t -w 2 10.96.6.129 80' Jan 13 08:27:42.047: INFO: stderr: "I0113 08:27:41.910351 3857 log.go:181] (0x40002de000) (0x4000c161e0) Create stream\nI0113 08:27:41.913029 3857 log.go:181] (0x40002de000) (0x4000c161e0) Stream added, broadcasting: 1\nI0113 08:27:41.922062 3857 log.go:181] (0x40002de000) Reply frame received for 1\nI0113 08:27:41.922575 3857 log.go:181] (0x40002de000) (0x4000c16280) Create stream\nI0113 08:27:41.922631 3857 log.go:181] (0x40002de000) (0x4000c16280) Stream added, broadcasting: 3\nI0113 08:27:41.923926 3857 log.go:181] (0x40002de000) Reply frame received for 3\nI0113 08:27:41.924298 3857 log.go:181] (0x40002de000) (0x40006aa000) Create stream\nI0113 08:27:41.924359 3857 log.go:181] (0x40002de000) (0x40006aa000) Stream added, broadcasting: 5\nI0113 08:27:41.925708 3857 log.go:181] (0x40002de000) Reply frame received for 5\nI0113 08:27:42.025352 3857 log.go:181] (0x40002de000) Data frame received for 5\nI0113 08:27:42.025808 3857 log.go:181] (0x40006aa000) (5) Data frame handling\nI0113 08:27:42.026185 3857 log.go:181] (0x40002de000) Data frame received for 3\nI0113 08:27:42.026288 3857 log.go:181] (0x4000c16280) (3) Data frame handling\nI0113 08:27:42.026687 3857 log.go:181] (0x40002de000) Data frame received for 1\nI0113 08:27:42.026812 3857 log.go:181] (0x4000c161e0) (1) Data frame handling\n+ nc -zv -t -w 2 10.96.6.129 80\nConnection to 10.96.6.129 80 port [tcp/http] succeeded!\nI0113 08:27:42.028128 3857 log.go:181] (0x4000c161e0) (1) Data frame sent\nI0113 08:27:42.030924 3857 log.go:181] (0x40006aa000) (5) Data frame sent\nI0113 08:27:42.031090 3857 log.go:181] (0x40002de000) Data frame received for 5\nI0113 08:27:42.031161 3857 log.go:181] (0x40006aa000) (5) Data frame handling\nI0113 08:27:42.033498 3857 log.go:181] (0x40002de000) (0x4000c161e0) Stream removed, broadcasting: 1\nI0113 08:27:42.034193 3857 log.go:181] (0x40002de000) Go away received\nI0113 08:27:42.038745 3857 log.go:181] (0x40002de000) (0x4000c161e0) Stream removed, broadcasting: 1\nI0113 08:27:42.039231 3857 log.go:181] (0x40002de000) (0x4000c16280) Stream removed, broadcasting: 3\nI0113 08:27:42.039507 3857 log.go:181] (0x40002de000) (0x40006aa000) Stream removed, broadcasting: 5\n" Jan 13 08:27:42.048: INFO: stdout: "" Jan 13 08:27:42.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.6.129:80/ ; done' Jan 13 08:27:43.749: INFO: stderr: "I0113 08:27:43.533159 3878 log.go:181] (0x400003a420) (0x40000c3680) Create stream\nI0113 08:27:43.536929 3878 log.go:181] (0x400003a420) (0x40000c3680) Stream added, broadcasting: 1\nI0113 08:27:43.549357 3878 log.go:181] (0x400003a420) Reply frame received for 1\nI0113 08:27:43.550200 3878 log.go:181] (0x400003a420) 
(0x40006b0000) Create stream\nI0113 08:27:43.550266 3878 log.go:181] (0x400003a420) (0x40006b0000) Stream added, broadcasting: 3\nI0113 08:27:43.551775 3878 log.go:181] (0x400003a420) Reply frame received for 3\nI0113 08:27:43.552255 3878 log.go:181] (0x400003a420) (0x40003105a0) Create stream\nI0113 08:27:43.552357 3878 log.go:181] (0x400003a420) (0x40003105a0) Stream added, broadcasting: 5\nI0113 08:27:43.554030 3878 log.go:181] (0x400003a420) Reply frame received for 5\nI0113 08:27:43.634095 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.634556 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.634674 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.634787 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.635538 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.635822 3878 log.go:181] (0x40006b0000) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.638844 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.638943 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.639075 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.639580 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.639721 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.639821 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.639904 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.640007 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.640107 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.643645 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.643751 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.643864 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.644068 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.644242 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.644415 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.644568 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.644706 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.644978 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.650324 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.650419 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.650536 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.651517 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.651716 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.651857 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.651981 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.652092 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.652281 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.655904 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.656025 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.656135 3878 log.go:181] (0x40006b0000) (3) 
Data frame sent\nI0113 08:27:43.656463 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.656592 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.656691 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.656810 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.657020 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.657126 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.661291 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.661436 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.661557 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.661729 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.661872 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.661971 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.662133 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.662265 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.662340 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.665865 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.665984 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.666091 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.666442 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.666626 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.666807 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI0113 08:27:43.666946 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.667045 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.667222 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.667354 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.667445 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.667566 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n 2 http://10.96.6.129:80/\nI0113 08:27:43.673143 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.673262 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.673420 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.673947 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.674071 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.674221 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.674348 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.674492 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.674617 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.678594 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.678753 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.678895 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.679031 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.679199 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.679331 3878 log.go:181] (0x400003a420) Data 
frame received for 5\nI0113 08:27:43.679404 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.679504 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.679634 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.685746 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.685888 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.686047 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.686213 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.686310 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.686389 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.686461 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.686739 3878 log.go:181] (0x40006b0000) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.686840 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.693176 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.693280 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.693379 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.693873 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.693977 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.694065 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.694178 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.694256 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.694386 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.699085 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.699200 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.699324 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.699715 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.699919 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.700062 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.700201 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.700304 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.700414 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.704980 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.705120 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.705255 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.705375 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.705493 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.705636 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.705799 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.705889 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.706011 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.709250 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.709405 3878 log.go:181] (0x40006b0000) (3) Data frame 
handling\nI0113 08:27:43.709544 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.709734 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.709914 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.710031 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.710201 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.710348 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.710465 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.715748 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.715882 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.716011 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.716127 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.716235 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.716366 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.716475 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.716578 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.716719 3878 log.go:181] (0x40003105a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.721576 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.721690 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.721809 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.722117 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.722248 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.722371 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.722605 3878 log.go:181] (0x40003105a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:43.722808 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.722995 3878 log.go:181] (0x40003105a0) (5) Data frame sent\nI0113 08:27:43.729604 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.729729 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.729870 3878 log.go:181] (0x40006b0000) (3) Data frame sent\nI0113 08:27:43.730616 3878 log.go:181] (0x400003a420) Data frame received for 5\nI0113 08:27:43.730773 3878 log.go:181] (0x40003105a0) (5) Data frame handling\nI0113 08:27:43.731017 3878 log.go:181] (0x400003a420) Data frame received for 3\nI0113 08:27:43.731161 3878 log.go:181] (0x40006b0000) (3) Data frame handling\nI0113 08:27:43.733044 3878 log.go:181] (0x400003a420) Data frame received for 1\nI0113 08:27:43.733172 3878 log.go:181] (0x40000c3680) (1) Data frame handling\nI0113 08:27:43.733299 3878 log.go:181] (0x40000c3680) (1) Data frame sent\nI0113 08:27:43.734181 3878 log.go:181] (0x400003a420) (0x40000c3680) Stream removed, broadcasting: 1\nI0113 08:27:43.736732 3878 log.go:181] (0x400003a420) Go away received\nI0113 08:27:43.740429 3878 log.go:181] (0x400003a420) (0x40000c3680) Stream removed, broadcasting: 1\nI0113 08:27:43.741407 3878 log.go:181] (0x400003a420) (0x40006b0000) Stream removed, broadcasting: 3\nI0113 08:27:43.741721 3878 log.go:181] (0x400003a420) (0x40003105a0) Stream removed, broadcasting: 5\n" Jan 13 08:27:43.755: INFO: stdout: 
"\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc\naffinity-clusterip-timeout-mfbwc" Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.755: INFO: Received response from host: affinity-clusterip-timeout-mfbwc Jan 13 08:27:43.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.6.129:80/' Jan 13 08:27:45.397: INFO: stderr: "I0113 08:27:45.273826 3898 log.go:181] (0x4000d12000) (0x40007a0000) Create stream\nI0113 08:27:45.278213 3898 log.go:181] (0x4000d12000) (0x40007a0000) Stream added, broadcasting: 1\nI0113 08:27:45.292171 3898 log.go:181] (0x4000d12000) Reply frame received for 1\nI0113 08:27:45.292774 3898 log.go:181] (0x4000d12000) (0x4000712640) Create stream\nI0113 08:27:45.292924 3898 log.go:181] (0x4000d12000) (0x4000712640) Stream added, broadcasting: 3\nI0113 08:27:45.294482 3898 log.go:181] (0x4000d12000) Reply frame received for 3\nI0113 08:27:45.294870 3898 log.go:181] (0x4000d12000) (0x40007a00a0) Create stream\nI0113 08:27:45.294962 3898 log.go:181] (0x4000d12000) (0x40007a00a0) Stream added, broadcasting: 5\nI0113 08:27:45.296381 3898 log.go:181] (0x4000d12000) Reply frame received for 5\nI0113 08:27:45.375355 3898 log.go:181] (0x4000d12000) Data frame received for 5\nI0113 08:27:45.375660 3898 log.go:181] (0x40007a00a0) (5) Data frame handling\nI0113 08:27:45.376120 3898 log.go:181] (0x40007a00a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:27:45.379988 3898 log.go:181] (0x4000d12000) Data frame received for 3\nI0113 08:27:45.380169 3898 log.go:181] (0x4000712640) (3) Data frame handling\nI0113 08:27:45.380334 3898 log.go:181] 
(0x4000712640) (3) Data frame sent\nI0113 08:27:45.380545 3898 log.go:181] (0x4000d12000) Data frame received for 3\nI0113 08:27:45.380678 3898 log.go:181] (0x4000712640) (3) Data frame handling\nI0113 08:27:45.381302 3898 log.go:181] (0x4000d12000) Data frame received for 5\nI0113 08:27:45.381409 3898 log.go:181] (0x40007a00a0) (5) Data frame handling\nI0113 08:27:45.382677 3898 log.go:181] (0x4000d12000) Data frame received for 1\nI0113 08:27:45.382793 3898 log.go:181] (0x40007a0000) (1) Data frame handling\nI0113 08:27:45.382903 3898 log.go:181] (0x40007a0000) (1) Data frame sent\nI0113 08:27:45.385048 3898 log.go:181] (0x4000d12000) (0x40007a0000) Stream removed, broadcasting: 1\nI0113 08:27:45.385847 3898 log.go:181] (0x4000d12000) Go away received\nI0113 08:27:45.389947 3898 log.go:181] (0x4000d12000) (0x40007a0000) Stream removed, broadcasting: 1\nI0113 08:27:45.390246 3898 log.go:181] (0x4000d12000) (0x4000712640) Stream removed, broadcasting: 3\nI0113 08:27:45.390430 3898 log.go:181] (0x4000d12000) (0x40007a00a0) Stream removed, broadcasting: 5\n" Jan 13 08:27:45.399: INFO: stdout: "affinity-clusterip-timeout-mfbwc" Jan 13 08:28:05.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.6.129:80/' Jan 13 08:28:07.125: INFO: stderr: "I0113 08:28:07.010560 3918 log.go:181] (0x40001bc420) (0x400098e8c0) Create stream\nI0113 08:28:07.017240 3918 log.go:181] (0x40001bc420) (0x400098e8c0) Stream added, broadcasting: 1\nI0113 08:28:07.027839 3918 log.go:181] (0x40001bc420) Reply frame received for 1\nI0113 08:28:07.028389 3918 log.go:181] (0x40001bc420) (0x4000508f00) Create stream\nI0113 08:28:07.028451 3918 log.go:181] (0x40001bc420) (0x4000508f00) Stream added, broadcasting: 3\nI0113 08:28:07.030205 3918 log.go:181] (0x40001bc420) Reply frame received for 3\nI0113 08:28:07.030583 3918 log.go:181] (0x40001bc420) (0x4000509cc0) Create stream\nI0113 08:28:07.030662 3918 log.go:181] (0x40001bc420) (0x4000509cc0) Stream added, broadcasting: 5\nI0113 08:28:07.031887 3918 log.go:181] (0x40001bc420) Reply frame received for 5\nI0113 08:28:07.101284 3918 log.go:181] (0x40001bc420) Data frame received for 5\nI0113 08:28:07.101555 3918 log.go:181] (0x4000509cc0) (5) Data frame handling\nI0113 08:28:07.102129 3918 log.go:181] (0x4000509cc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:28:07.104556 3918 log.go:181] (0x40001bc420) Data frame received for 3\nI0113 08:28:07.104733 3918 log.go:181] (0x4000508f00) (3) Data frame handling\nI0113 08:28:07.105018 3918 log.go:181] (0x4000508f00) (3) Data frame sent\nI0113 08:28:07.105143 3918 log.go:181] (0x40001bc420) Data frame received for 3\nI0113 08:28:07.105235 3918 log.go:181] (0x4000508f00) (3) Data frame handling\nI0113 08:28:07.106029 3918 log.go:181] (0x40001bc420) Data frame received for 5\nI0113 08:28:07.106156 3918 log.go:181] (0x4000509cc0) (5) Data frame handling\nI0113 08:28:07.107376 3918 log.go:181] (0x40001bc420) Data frame received for 1\nI0113 08:28:07.107490 3918 log.go:181] (0x400098e8c0) (1) Data frame handling\nI0113 08:28:07.107603 3918 log.go:181] (0x400098e8c0) (1) Data frame sent\nI0113 08:28:07.109119 3918 log.go:181] (0x40001bc420) (0x400098e8c0) Stream removed, broadcasting: 1\nI0113 08:28:07.112322 3918 log.go:181] (0x40001bc420) Go away received\nI0113 08:28:07.116355 3918 log.go:181] (0x40001bc420) (0x400098e8c0) 
Stream removed, broadcasting: 1\nI0113 08:28:07.116739 3918 log.go:181] (0x40001bc420) (0x4000508f00) Stream removed, broadcasting: 3\nI0113 08:28:07.117126 3918 log.go:181] (0x40001bc420) (0x4000509cc0) Stream removed, broadcasting: 5\n" Jan 13 08:28:07.126: INFO: stdout: "affinity-clusterip-timeout-mfbwc" Jan 13 08:28:27.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-1339 exec execpod-affinitytd4mv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.6.129:80/' Jan 13 08:28:28.633: INFO: stderr: "I0113 08:28:28.530898 3938 log.go:181] (0x4000b0c840) (0x4000ac0320) Create stream\nI0113 08:28:28.533743 3938 log.go:181] (0x4000b0c840) (0x4000ac0320) Stream added, broadcasting: 1\nI0113 08:28:28.546983 3938 log.go:181] (0x4000b0c840) Reply frame received for 1\nI0113 08:28:28.547784 3938 log.go:181] (0x4000b0c840) (0x40007925a0) Create stream\nI0113 08:28:28.547856 3938 log.go:181] (0x4000b0c840) (0x40007925a0) Stream added, broadcasting: 3\nI0113 08:28:28.549348 3938 log.go:181] (0x4000b0c840) Reply frame received for 3\nI0113 08:28:28.549644 3938 log.go:181] (0x4000b0c840) (0x4000928460) Create stream\nI0113 08:28:28.549741 3938 log.go:181] (0x4000b0c840) (0x4000928460) Stream added, broadcasting: 5\nI0113 08:28:28.550908 3938 log.go:181] (0x4000b0c840) Reply frame received for 5\nI0113 08:28:28.609606 3938 log.go:181] (0x4000b0c840) Data frame received for 5\nI0113 08:28:28.609812 3938 log.go:181] (0x4000928460) (5) Data frame handling\nI0113 08:28:28.610197 3938 log.go:181] (0x4000928460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.6.129:80/\nI0113 08:28:28.614454 3938 log.go:181] (0x4000b0c840) Data frame received for 3\nI0113 08:28:28.614566 3938 log.go:181] (0x40007925a0) (3) Data frame handling\nI0113 08:28:28.614657 3938 log.go:181] (0x40007925a0) (3) Data frame sent\nI0113 08:28:28.614792 3938 log.go:181] (0x4000b0c840) Data frame received for 5\nI0113 08:28:28.614892 3938 log.go:181] (0x4000928460) (5) Data frame handling\nI0113 08:28:28.615289 3938 log.go:181] (0x4000b0c840) Data frame received for 3\nI0113 08:28:28.615443 3938 log.go:181] (0x40007925a0) (3) Data frame handling\nI0113 08:28:28.617019 3938 log.go:181] (0x4000b0c840) Data frame received for 1\nI0113 08:28:28.617168 3938 log.go:181] (0x4000ac0320) (1) Data frame handling\nI0113 08:28:28.617323 3938 log.go:181] (0x4000ac0320) (1) Data frame sent\nI0113 08:28:28.618353 3938 log.go:181] (0x4000b0c840) (0x4000ac0320) Stream removed, broadcasting: 1\nI0113 08:28:28.621047 3938 log.go:181] (0x4000b0c840) Go away received\nI0113 08:28:28.625066 3938 log.go:181] (0x4000b0c840) (0x4000ac0320) Stream removed, broadcasting: 1\nI0113 08:28:28.625497 3938 log.go:181] (0x4000b0c840) (0x40007925a0) Stream removed, broadcasting: 3\nI0113 08:28:28.625758 3938 log.go:181] (0x4000b0c840) (0x4000928460) Stream removed, broadcasting: 5\n" Jan 13 08:28:28.634: INFO: stdout: "affinity-clusterip-timeout-9wmdw" Jan 13 08:28:28.634: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-1339, will wait for the garbage collector to delete the pods Jan 13 08:28:29.002: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 183.453604ms Jan 13 08:28:29.603: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.850015ms [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:28:50.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1339" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:91.866 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":309,"completed":281,"skipped":5004,"failed":0} [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:28:50.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 13 08:28:54.584: INFO: starting watch STEP: patching STEP: updating Jan 13 08:28:54.599: INFO: waiting for watch events with expected annotations Jan 13 08:28:54.600: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:28:54.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-8436" for this suite. 
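The PASSED session-affinity case above shows every probe from the exec pod landing on the same backend (affinity-clusterip-timeout-mfbwc) until the client has been idle past the configured window, after which a different pod (affinity-clusterip-timeout-9wmdw) answers. As a minimal client-go sketch of the kind of Service spec that produces this behaviour — the namespace, selector and timeout value below are placeholders for illustration, not values taken from the log — it would look roughly like this:

// Sketch only: create a ClusterIP Service with ClientIP session affinity and
// an explicit idle timeout, similar in shape to the object the
// "affinity-clusterip-timeout" test exercises. Names and the timeout value
// are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig the same way the e2e run does (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	timeout := int32(10) // idle seconds before affinity to a backend expires (illustrative value)
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip-timeout"}, // must match the backing pods
			Ports:    []corev1.ServicePort{{Port: 80}},
			// Route a given client to the same backend pod...
			SessionAffinity: corev1.ServiceAffinityClientIP,
			// ...until that client has been idle for longer than TimeoutSeconds.
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}

	created, err := client.CoreV1().Services("services-1339").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "with ClusterIP", created.Spec.ClusterIP)
}

Broadly, the affinity entry is refreshed while the client keeps connecting and expires once it stays idle longer than the configured timeout, which is why the test spaces its curl probes apart before expecting a different backend to respond.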
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":309,"completed":282,"skipped":5004,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:28:54.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9574 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating stateful set ss in namespace statefulset-9574 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9574 Jan 13 08:28:54.937: INFO: Found 0 stateful pods, waiting for 1 Jan 13 08:29:04.946: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 13 08:29:04.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:29:06.589: INFO: stderr: "I0113 08:29:06.423079 3958 log.go:181] (0x400003a0b0) (0x40003437c0) Create stream\nI0113 08:29:06.426193 3958 log.go:181] (0x400003a0b0) (0x40003437c0) Stream added, broadcasting: 1\nI0113 08:29:06.437277 3958 log.go:181] (0x400003a0b0) Reply frame received for 1\nI0113 08:29:06.438006 3958 log.go:181] (0x400003a0b0) (0x4000c0c000) Create stream\nI0113 08:29:06.438079 3958 log.go:181] (0x400003a0b0) (0x4000c0c000) Stream added, broadcasting: 3\nI0113 08:29:06.439337 3958 log.go:181] (0x400003a0b0) Reply frame received for 3\nI0113 08:29:06.439665 3958 log.go:181] (0x400003a0b0) (0x40004325a0) Create stream\nI0113 08:29:06.439722 3958 log.go:181] (0x400003a0b0) (0x40004325a0) Stream added, broadcasting: 5\nI0113 08:29:06.440671 3958 log.go:181] (0x400003a0b0) Reply frame received for 5\nI0113 08:29:06.543719 3958 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:29:06.543902 3958 log.go:181] (0x40004325a0) (5) Data frame handling\nI0113 08:29:06.544265 3958 log.go:181] (0x40004325a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:29:06.569753 3958 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:29:06.569864 3958 log.go:181] (0x4000c0c000) (3) Data frame handling\nI0113 08:29:06.570292 3958 log.go:181] (0x400003a0b0) Data frame received for 5\nI0113 08:29:06.570420 3958 log.go:181] (0x40004325a0) (5) Data frame 
handling\nI0113 08:29:06.570571 3958 log.go:181] (0x4000c0c000) (3) Data frame sent\nI0113 08:29:06.570735 3958 log.go:181] (0x400003a0b0) Data frame received for 3\nI0113 08:29:06.570843 3958 log.go:181] (0x4000c0c000) (3) Data frame handling\nI0113 08:29:06.574045 3958 log.go:181] (0x400003a0b0) Data frame received for 1\nI0113 08:29:06.574141 3958 log.go:181] (0x40003437c0) (1) Data frame handling\nI0113 08:29:06.574284 3958 log.go:181] (0x40003437c0) (1) Data frame sent\nI0113 08:29:06.575167 3958 log.go:181] (0x400003a0b0) (0x40003437c0) Stream removed, broadcasting: 1\nI0113 08:29:06.577706 3958 log.go:181] (0x400003a0b0) Go away received\nI0113 08:29:06.580588 3958 log.go:181] (0x400003a0b0) (0x40003437c0) Stream removed, broadcasting: 1\nI0113 08:29:06.581381 3958 log.go:181] (0x400003a0b0) (0x4000c0c000) Stream removed, broadcasting: 3\nI0113 08:29:06.581872 3958 log.go:181] (0x400003a0b0) (0x40004325a0) Stream removed, broadcasting: 5\n" Jan 13 08:29:06.591: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:29:06.591: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 08:29:06.599: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 08:29:16.609: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 08:29:16.609: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 08:29:16.634: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:16.637: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:16.637: INFO: Jan 13 08:29:16.637: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 13 08:29:17.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990140917s Jan 13 08:29:18.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.943391207s Jan 13 08:29:19.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93230092s Jan 13 08:29:21.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.882745138s Jan 13 08:29:22.177: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.487905613s Jan 13 08:29:23.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.450696393s Jan 13 08:29:24.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.44166894s Jan 13 08:29:25.222: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.416525106s Jan 13 08:29:26.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 405.200523ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9574 Jan 13 08:29:27.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:29:28.838: INFO: stderr: "I0113 08:29:28.704340 3978 log.go:181] (0x4000296000) (0x4000936280) Create 
stream\nI0113 08:29:28.709684 3978 log.go:181] (0x4000296000) (0x4000936280) Stream added, broadcasting: 1\nI0113 08:29:28.723470 3978 log.go:181] (0x4000296000) Reply frame received for 1\nI0113 08:29:28.724077 3978 log.go:181] (0x4000296000) (0x4000936320) Create stream\nI0113 08:29:28.724139 3978 log.go:181] (0x4000296000) (0x4000936320) Stream added, broadcasting: 3\nI0113 08:29:28.725487 3978 log.go:181] (0x4000296000) Reply frame received for 3\nI0113 08:29:28.725769 3978 log.go:181] (0x4000296000) (0x40009b7720) Create stream\nI0113 08:29:28.725837 3978 log.go:181] (0x4000296000) (0x40009b7720) Stream added, broadcasting: 5\nI0113 08:29:28.730677 3978 log.go:181] (0x4000296000) Reply frame received for 5\nI0113 08:29:28.816806 3978 log.go:181] (0x4000296000) Data frame received for 5\nI0113 08:29:28.817201 3978 log.go:181] (0x4000296000) Data frame received for 3\nI0113 08:29:28.817422 3978 log.go:181] (0x40009b7720) (5) Data frame handling\nI0113 08:29:28.817697 3978 log.go:181] (0x4000296000) Data frame received for 1\nI0113 08:29:28.818089 3978 log.go:181] (0x4000936280) (1) Data frame handling\nI0113 08:29:28.818433 3978 log.go:181] (0x4000936320) (3) Data frame handling\nI0113 08:29:28.819294 3978 log.go:181] (0x4000936320) (3) Data frame sent\nI0113 08:29:28.820258 3978 log.go:181] (0x40009b7720) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0113 08:29:28.820535 3978 log.go:181] (0x4000296000) Data frame received for 5\nI0113 08:29:28.820623 3978 log.go:181] (0x40009b7720) (5) Data frame handling\nI0113 08:29:28.820748 3978 log.go:181] (0x4000936280) (1) Data frame sent\nI0113 08:29:28.821014 3978 log.go:181] (0x4000296000) Data frame received for 3\nI0113 08:29:28.821163 3978 log.go:181] (0x4000936320) (3) Data frame handling\nI0113 08:29:28.822035 3978 log.go:181] (0x4000296000) (0x4000936280) Stream removed, broadcasting: 1\nI0113 08:29:28.825295 3978 log.go:181] (0x4000296000) Go away received\nI0113 08:29:28.829771 3978 log.go:181] (0x4000296000) (0x4000936280) Stream removed, broadcasting: 1\nI0113 08:29:28.830127 3978 log.go:181] (0x4000296000) (0x4000936320) Stream removed, broadcasting: 3\nI0113 08:29:28.830374 3978 log.go:181] (0x4000296000) (0x40009b7720) Stream removed, broadcasting: 5\n" Jan 13 08:29:28.840: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 08:29:28.840: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 08:29:28.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:29:30.453: INFO: stderr: "I0113 08:29:30.320303 3998 log.go:181] (0x4000d1c0b0) (0x4000554000) Create stream\nI0113 08:29:30.326222 3998 log.go:181] (0x4000d1c0b0) (0x4000554000) Stream added, broadcasting: 1\nI0113 08:29:30.339850 3998 log.go:181] (0x4000d1c0b0) Reply frame received for 1\nI0113 08:29:30.340548 3998 log.go:181] (0x4000d1c0b0) (0x4000655180) Create stream\nI0113 08:29:30.340622 3998 log.go:181] (0x4000d1c0b0) (0x4000655180) Stream added, broadcasting: 3\nI0113 08:29:30.342090 3998 log.go:181] (0x4000d1c0b0) Reply frame received for 3\nI0113 08:29:30.342357 3998 log.go:181] (0x4000d1c0b0) (0x40008c01e0) Create stream\nI0113 08:29:30.342425 3998 log.go:181] (0x4000d1c0b0) (0x40008c01e0) Stream added, broadcasting: 5\nI0113 
08:29:30.343682 3998 log.go:181] (0x4000d1c0b0) Reply frame received for 5\nI0113 08:29:30.433073 3998 log.go:181] (0x4000d1c0b0) Data frame received for 5\nI0113 08:29:30.435069 3998 log.go:181] (0x4000d1c0b0) Data frame received for 3\nI0113 08:29:30.435469 3998 log.go:181] (0x4000d1c0b0) Data frame received for 1\nI0113 08:29:30.435776 3998 log.go:181] (0x40008c01e0) (5) Data frame handling\nI0113 08:29:30.436672 3998 log.go:181] (0x4000554000) (1) Data frame handling\nI0113 08:29:30.437685 3998 log.go:181] (0x4000655180) (3) Data frame handling\nI0113 08:29:30.437911 3998 log.go:181] (0x40008c01e0) (5) Data frame sent\nI0113 08:29:30.438293 3998 log.go:181] (0x4000655180) (3) Data frame sent\nI0113 08:29:30.438539 3998 log.go:181] (0x4000554000) (1) Data frame sent\nI0113 08:29:30.438798 3998 log.go:181] (0x4000d1c0b0) Data frame received for 5\nI0113 08:29:30.438891 3998 log.go:181] (0x40008c01e0) (5) Data frame handling\nI0113 08:29:30.439008 3998 log.go:181] (0x4000d1c0b0) Data frame received for 3\nI0113 08:29:30.439085 3998 log.go:181] (0x4000655180) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0113 08:29:30.440826 3998 log.go:181] (0x4000d1c0b0) (0x4000554000) Stream removed, broadcasting: 1\nI0113 08:29:30.441799 3998 log.go:181] (0x4000d1c0b0) Go away received\nI0113 08:29:30.445388 3998 log.go:181] (0x4000d1c0b0) (0x4000554000) Stream removed, broadcasting: 1\nI0113 08:29:30.445695 3998 log.go:181] (0x4000d1c0b0) (0x4000655180) Stream removed, broadcasting: 3\nI0113 08:29:30.445929 3998 log.go:181] (0x4000d1c0b0) (0x40008c01e0) Stream removed, broadcasting: 5\n" Jan 13 08:29:30.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 08:29:30.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 08:29:30.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:29:32.087: INFO: stderr: "I0113 08:29:31.975253 4018 log.go:181] (0x400015ad10) (0x4000552140) Create stream\nI0113 08:29:31.982604 4018 log.go:181] (0x400015ad10) (0x4000552140) Stream added, broadcasting: 1\nI0113 08:29:31.992886 4018 log.go:181] (0x400015ad10) Reply frame received for 1\nI0113 08:29:31.993473 4018 log.go:181] (0x400015ad10) (0x4000a7e000) Create stream\nI0113 08:29:31.993538 4018 log.go:181] (0x400015ad10) (0x4000a7e000) Stream added, broadcasting: 3\nI0113 08:29:31.995181 4018 log.go:181] (0x400015ad10) Reply frame received for 3\nI0113 08:29:31.995620 4018 log.go:181] (0x400015ad10) (0x4000b32a00) Create stream\nI0113 08:29:31.995716 4018 log.go:181] (0x400015ad10) (0x4000b32a00) Stream added, broadcasting: 5\nI0113 08:29:31.997583 4018 log.go:181] (0x400015ad10) Reply frame received for 5\nI0113 08:29:32.066639 4018 log.go:181] (0x400015ad10) Data frame received for 3\nI0113 08:29:32.067136 4018 log.go:181] (0x4000a7e000) (3) Data frame handling\nI0113 08:29:32.067440 4018 log.go:181] (0x400015ad10) Data frame received for 1\nI0113 08:29:32.067596 4018 log.go:181] (0x400015ad10) Data frame received for 5\nI0113 08:29:32.067745 4018 log.go:181] (0x4000b32a00) (5) Data frame handling\nI0113 08:29:32.067876 4018 log.go:181] (0x4000552140) (1) Data frame handling\nI0113 08:29:32.068726 4018 
log.go:181] (0x4000b32a00) (5) Data frame sent\nI0113 08:29:32.068971 4018 log.go:181] (0x4000552140) (1) Data frame sent\nI0113 08:29:32.069317 4018 log.go:181] (0x4000a7e000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0113 08:29:32.070290 4018 log.go:181] (0x400015ad10) Data frame received for 5\nI0113 08:29:32.070375 4018 log.go:181] (0x4000b32a00) (5) Data frame handling\nI0113 08:29:32.072578 4018 log.go:181] (0x400015ad10) Data frame received for 3\nI0113 08:29:32.072820 4018 log.go:181] (0x400015ad10) (0x4000552140) Stream removed, broadcasting: 1\nI0113 08:29:32.073384 4018 log.go:181] (0x4000a7e000) (3) Data frame handling\nI0113 08:29:32.074094 4018 log.go:181] (0x400015ad10) Go away received\nI0113 08:29:32.078922 4018 log.go:181] (0x400015ad10) (0x4000552140) Stream removed, broadcasting: 1\nI0113 08:29:32.079231 4018 log.go:181] (0x400015ad10) (0x4000a7e000) Stream removed, broadcasting: 3\nI0113 08:29:32.079425 4018 log.go:181] (0x400015ad10) (0x4000b32a00) Stream removed, broadcasting: 5\n" Jan 13 08:29:32.088: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 08:29:32.088: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 08:29:32.096: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:29:32.096: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 08:29:32.097: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 13 08:29:32.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:29:33.625: INFO: stderr: "I0113 08:29:33.514253 4038 log.go:181] (0x4000232370) (0x400029d540) Create stream\nI0113 08:29:33.516571 4038 log.go:181] (0x4000232370) (0x400029d540) Stream added, broadcasting: 1\nI0113 08:29:33.527060 4038 log.go:181] (0x4000232370) Reply frame received for 1\nI0113 08:29:33.528106 4038 log.go:181] (0x4000232370) (0x400023d900) Create stream\nI0113 08:29:33.528193 4038 log.go:181] (0x4000232370) (0x400023d900) Stream added, broadcasting: 3\nI0113 08:29:33.529752 4038 log.go:181] (0x4000232370) Reply frame received for 3\nI0113 08:29:33.529969 4038 log.go:181] (0x4000232370) (0x400029de00) Create stream\nI0113 08:29:33.530025 4038 log.go:181] (0x4000232370) (0x400029de00) Stream added, broadcasting: 5\nI0113 08:29:33.531177 4038 log.go:181] (0x4000232370) Reply frame received for 5\nI0113 08:29:33.605512 4038 log.go:181] (0x4000232370) Data frame received for 3\nI0113 08:29:33.606016 4038 log.go:181] (0x4000232370) Data frame received for 5\nI0113 08:29:33.606247 4038 log.go:181] (0x400029de00) (5) Data frame handling\nI0113 08:29:33.606366 4038 log.go:181] (0x4000232370) Data frame received for 1\nI0113 08:29:33.606495 4038 log.go:181] (0x400029d540) (1) Data frame handling\nI0113 08:29:33.606797 4038 log.go:181] (0x400023d900) (3) Data frame handling\nI0113 08:29:33.607717 4038 log.go:181] (0x400029d540) (1) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:29:33.608183 4038 log.go:181] (0x400029de00) (5) Data frame sent\nI0113 08:29:33.608701 
4038 log.go:181] (0x400023d900) (3) Data frame sent\nI0113 08:29:33.608801 4038 log.go:181] (0x4000232370) Data frame received for 3\nI0113 08:29:33.608950 4038 log.go:181] (0x400023d900) (3) Data frame handling\nI0113 08:29:33.609915 4038 log.go:181] (0x4000232370) Data frame received for 5\nI0113 08:29:33.610052 4038 log.go:181] (0x400029de00) (5) Data frame handling\nI0113 08:29:33.611580 4038 log.go:181] (0x4000232370) (0x400029d540) Stream removed, broadcasting: 1\nI0113 08:29:33.613006 4038 log.go:181] (0x4000232370) Go away received\nI0113 08:29:33.616709 4038 log.go:181] (0x4000232370) (0x400029d540) Stream removed, broadcasting: 1\nI0113 08:29:33.617231 4038 log.go:181] (0x4000232370) (0x400023d900) Stream removed, broadcasting: 3\nI0113 08:29:33.617490 4038 log.go:181] (0x4000232370) (0x400029de00) Stream removed, broadcasting: 5\n" Jan 13 08:29:33.626: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:29:33.626: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 08:29:33.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:29:35.211: INFO: stderr: "I0113 08:29:35.058968 4058 log.go:181] (0x4000e20000) (0x4000b98140) Create stream\nI0113 08:29:35.062909 4058 log.go:181] (0x4000e20000) (0x4000b98140) Stream added, broadcasting: 1\nI0113 08:29:35.077312 4058 log.go:181] (0x4000e20000) Reply frame received for 1\nI0113 08:29:35.078287 4058 log.go:181] (0x4000e20000) (0x400048bc20) Create stream\nI0113 08:29:35.078380 4058 log.go:181] (0x4000e20000) (0x400048bc20) Stream added, broadcasting: 3\nI0113 08:29:35.079982 4058 log.go:181] (0x4000e20000) Reply frame received for 3\nI0113 08:29:35.080206 4058 log.go:181] (0x4000e20000) (0x400059e6e0) Create stream\nI0113 08:29:35.080266 4058 log.go:181] (0x4000e20000) (0x400059e6e0) Stream added, broadcasting: 5\nI0113 08:29:35.081676 4058 log.go:181] (0x4000e20000) Reply frame received for 5\nI0113 08:29:35.165932 4058 log.go:181] (0x4000e20000) Data frame received for 5\nI0113 08:29:35.166130 4058 log.go:181] (0x400059e6e0) (5) Data frame handling\nI0113 08:29:35.166465 4058 log.go:181] (0x400059e6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:29:35.189141 4058 log.go:181] (0x4000e20000) Data frame received for 3\nI0113 08:29:35.189272 4058 log.go:181] (0x400048bc20) (3) Data frame handling\nI0113 08:29:35.189559 4058 log.go:181] (0x400048bc20) (3) Data frame sent\nI0113 08:29:35.189699 4058 log.go:181] (0x4000e20000) Data frame received for 3\nI0113 08:29:35.189807 4058 log.go:181] (0x400048bc20) (3) Data frame handling\nI0113 08:29:35.189988 4058 log.go:181] (0x4000e20000) Data frame received for 5\nI0113 08:29:35.190088 4058 log.go:181] (0x400059e6e0) (5) Data frame handling\nI0113 08:29:35.190978 4058 log.go:181] (0x4000e20000) Data frame received for 1\nI0113 08:29:35.191158 4058 log.go:181] (0x4000b98140) (1) Data frame handling\nI0113 08:29:35.191366 4058 log.go:181] (0x4000b98140) (1) Data frame sent\nI0113 08:29:35.193253 4058 log.go:181] (0x4000e20000) (0x4000b98140) Stream removed, broadcasting: 1\nI0113 08:29:35.197274 4058 log.go:181] (0x4000e20000) Go away received\nI0113 08:29:35.202296 4058 log.go:181] (0x4000e20000) (0x4000b98140) Stream removed, broadcasting: 1\nI0113 
08:29:35.202838 4058 log.go:181] (0x4000e20000) (0x400048bc20) Stream removed, broadcasting: 3\nI0113 08:29:35.203201 4058 log.go:181] (0x4000e20000) (0x400059e6e0) Stream removed, broadcasting: 5\n" Jan 13 08:29:35.212: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:29:35.212: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 08:29:35.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 08:29:36.968: INFO: stderr: "I0113 08:29:36.825404 4078 log.go:181] (0x400003a6e0) (0x4000c98140) Create stream\nI0113 08:29:36.830162 4078 log.go:181] (0x400003a6e0) (0x4000c98140) Stream added, broadcasting: 1\nI0113 08:29:36.844433 4078 log.go:181] (0x400003a6e0) Reply frame received for 1\nI0113 08:29:36.845188 4078 log.go:181] (0x400003a6e0) (0x400031c640) Create stream\nI0113 08:29:36.845252 4078 log.go:181] (0x400003a6e0) (0x400031c640) Stream added, broadcasting: 3\nI0113 08:29:36.846487 4078 log.go:181] (0x400003a6e0) Reply frame received for 3\nI0113 08:29:36.846740 4078 log.go:181] (0x400003a6e0) (0x4000c981e0) Create stream\nI0113 08:29:36.846801 4078 log.go:181] (0x400003a6e0) (0x4000c981e0) Stream added, broadcasting: 5\nI0113 08:29:36.848086 4078 log.go:181] (0x400003a6e0) Reply frame received for 5\nI0113 08:29:36.913401 4078 log.go:181] (0x400003a6e0) Data frame received for 5\nI0113 08:29:36.913668 4078 log.go:181] (0x4000c981e0) (5) Data frame handling\nI0113 08:29:36.914081 4078 log.go:181] (0x4000c981e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0113 08:29:36.939170 4078 log.go:181] (0x400003a6e0) Data frame received for 3\nI0113 08:29:36.939467 4078 log.go:181] (0x400031c640) (3) Data frame handling\nI0113 08:29:36.939667 4078 log.go:181] (0x400003a6e0) Data frame received for 5\nI0113 08:29:36.939854 4078 log.go:181] (0x4000c981e0) (5) Data frame handling\nI0113 08:29:36.940069 4078 log.go:181] (0x400031c640) (3) Data frame sent\nI0113 08:29:36.940253 4078 log.go:181] (0x400003a6e0) Data frame received for 3\nI0113 08:29:36.940448 4078 log.go:181] (0x400031c640) (3) Data frame handling\nI0113 08:29:36.940648 4078 log.go:181] (0x400003a6e0) Data frame received for 1\nI0113 08:29:36.940802 4078 log.go:181] (0x4000c98140) (1) Data frame handling\nI0113 08:29:36.941135 4078 log.go:181] (0x4000c98140) (1) Data frame sent\nI0113 08:29:36.944646 4078 log.go:181] (0x400003a6e0) (0x4000c98140) Stream removed, broadcasting: 1\nI0113 08:29:36.947895 4078 log.go:181] (0x400003a6e0) Go away received\nI0113 08:29:36.960050 4078 log.go:181] (0x400003a6e0) (0x4000c98140) Stream removed, broadcasting: 1\nI0113 08:29:36.960465 4078 log.go:181] (0x400003a6e0) (0x400031c640) Stream removed, broadcasting: 3\nI0113 08:29:36.960699 4078 log.go:181] (0x400003a6e0) (0x4000c981e0) Stream removed, broadcasting: 5\n" Jan 13 08:29:36.969: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 08:29:36.969: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 08:29:36.969: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 08:29:36.975: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 13 
08:29:47.000: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 08:29:47.000: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 08:29:47.000: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 08:29:47.069: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:47.069: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:47.070: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:47.071: INFO: ss-2 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:47.071: INFO: Jan 13 08:29:47.071: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:48.098: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:48.098: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:48.098: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:48.098: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 
+0000 UTC }] Jan 13 08:29:48.099: INFO: Jan 13 08:29:48.099: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:49.162: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:49.162: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:49.163: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:49.163: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:49.163: INFO: Jan 13 08:29:49.163: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:50.173: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:50.174: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:50.174: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:50.174: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:50.174: INFO: Jan 13 08:29:50.174: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:51.181: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:51.181: INFO: ss-0 
leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:51.181: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:51.182: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:51.182: INFO: Jan 13 08:29:51.182: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:52.193: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:52.193: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:52.193: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:52.194: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:52.194: INFO: Jan 13 08:29:52.194: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:53.205: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:53.205: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:53.205: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:53.206: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:53.206: INFO: Jan 13 08:29:53.206: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:54.215: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:54.215: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:54.215: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:54.216: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:54.216: INFO: Jan 13 08:29:54.216: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:55.226: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:55.226: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:55.227: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:55.227: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:55.227: INFO: Jan 13 08:29:55.227: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 08:29:56.238: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 08:29:56.238: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:33 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:28:54 +0000 UTC }] Jan 13 08:29:56.238: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:56.239: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-01-13 08:29:16 +0000 UTC }] Jan 13 08:29:56.239: INFO: Jan 13 08:29:56.239: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9574 Jan 13 08:29:57.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:29:58.694: INFO: rc: 1 Jan 13 08:29:58.695: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: 
error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 08:30:08.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:30:10.345: INFO: rc: 1 Jan 13 08:30:10.345: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 08:30:20.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:30:21.744: INFO: rc: 1 Jan 13 08:30:21.745: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 08:30:31.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:30:33.161: INFO: rc: 1 Jan 13 08:30:33.161: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 08:30:43.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:30:44.655: INFO: rc: 1 Jan 13 08:30:44.655: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 08:30:54.656: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:30:55.942: INFO: rc: 1 Jan 13 08:30:55.943: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:31:05.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 
--kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:31:07.369: INFO: rc: 1 Jan 13 08:31:07.369: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:31:17.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:31:18.735: INFO: rc: 1 Jan 13 08:31:18.735: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:31:28.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:31:30.049: INFO: rc: 1 Jan 13 08:31:30.050: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:31:40.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:31:41.364: INFO: rc: 1 Jan 13 08:31:41.364: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:31:51.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:31:52.660: INFO: rc: 1 Jan 13 08:31:52.661: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:02.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:32:03.962: INFO: rc: 1 Jan 13 08:32:03.963: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:13.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:32:15.275: INFO: rc: 1 Jan 13 08:32:15.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:25.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:32:26.642: INFO: rc: 1 Jan 13 08:32:26.642: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:36.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:32:37.951: INFO: rc: 1 Jan 13 08:32:37.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:47.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:32:49.214: INFO: rc: 1 Jan 13 08:32:49.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:32:59.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:00.579: INFO: rc: 1 Jan 13 08:33:00.579: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods 
"ss-0" not found error: exit status 1 Jan 13 08:33:10.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:11.958: INFO: rc: 1 Jan 13 08:33:11.958: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:33:21.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:23.273: INFO: rc: 1 Jan 13 08:33:23.273: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:33:33.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:34.672: INFO: rc: 1 Jan 13 08:33:34.673: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:33:44.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:46.074: INFO: rc: 1 Jan 13 08:33:46.075: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:33:56.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:33:57.434: INFO: rc: 1 Jan 13 08:33:57.434: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:34:07.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Jan 13 08:34:08.733: INFO: rc: 1 Jan 13 08:34:08.733: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:34:18.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:34:20.103: INFO: rc: 1 Jan 13 08:34:20.104: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:34:30.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:34:31.436: INFO: rc: 1 Jan 13 08:34:31.436: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:34:41.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:34:42.854: INFO: rc: 1 Jan 13 08:34:42.854: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:34:52.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:34:54.363: INFO: rc: 1 Jan 13 08:34:54.363: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 13 08:35:04.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=statefulset-9574 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 08:35:05.640: INFO: rc: 1 Jan 13 08:35:05.640: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 13 08:35:05.640: INFO: Scaling statefulset ss to 0 Jan 13 08:35:05.654: INFO: Waiting for statefulset status.replicas 
updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 08:35:05.661: INFO: Deleting all statefulset in ns statefulset-9574 Jan 13 08:35:05.683: INFO: Scaling statefulset ss to 0 Jan 13 08:35:05.697: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 08:35:05.700: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:35:05.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9574" for this suite. • [SLOW TEST:370.953 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":309,"completed":283,"skipped":5012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:35:05.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-5a7f4e7c-034b-4688-8314-31cb9b20e816 STEP: Creating a pod to test consume configMaps Jan 13 08:35:05.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc" in namespace "configmap-5774" to be "Succeeded or Failed" Jan 13 08:35:06.005: INFO: Pod "pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.627885ms Jan 13 08:35:08.066: INFO: Pod "pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075207154s Jan 13 08:35:10.073: INFO: Pod "pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.082452974s STEP: Saw pod success Jan 13 08:35:10.074: INFO: Pod "pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc" satisfied condition "Succeeded or Failed" Jan 13 08:35:10.078: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc container agnhost-container: STEP: delete the pod Jan 13 08:35:10.117: INFO: Waiting for pod pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc to disappear Jan 13 08:35:10.132: INFO: Pod pod-configmaps-4a27b577-a66e-41a2-829b-87867b155bfc no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:35:10.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5774" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":309,"completed":284,"skipped":5036,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:35:10.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod liveness-2a169d67-9e08-422b-86e7-ead8d5467284 in namespace container-probe-4896 Jan 13 08:35:14.591: INFO: Started pod liveness-2a169d67-9e08-422b-86e7-ead8d5467284 in namespace container-probe-4896 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 08:35:14.597: INFO: Initial restart count of pod liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is 0 Jan 13 08:35:30.665: INFO: Restart count of pod container-probe-4896/liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is now 1 (16.067762053s elapsed) Jan 13 08:35:50.745: INFO: Restart count of pod container-probe-4896/liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is now 2 (36.147538093s elapsed) Jan 13 08:36:10.815: INFO: Restart count of pod container-probe-4896/liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is now 3 (56.217833777s elapsed) Jan 13 08:36:30.890: INFO: Restart count of pod container-probe-4896/liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is now 4 (1m16.293280316s elapsed) Jan 13 08:37:31.208: INFO: Restart count of pod container-probe-4896/liveness-2a169d67-9e08-422b-86e7-ead8d5467284 is now 5 (2m16.611096654s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:37:31.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4896" for this suite. 
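The restart-count check just above drives a container whose liveness probe keeps failing, so the kubelet restarts it again and again. A minimal sketch of that kind of pod, assuming a busybox image and illustrative names and timings rather than the suite's generated manifest:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-example                  # illustrative; the suite generates liveness-<uuid> names
spec:
  containers:
  - name: liveness
    image: busybox                        # assumed image, not the suite's test image
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]  # starts failing once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5

Every run of failed probes triggers another restart, which is what produces the monotonically increasing restartCount values logged above.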
• [SLOW TEST:141.099 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":309,"completed":285,"skipped":5042,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:37:31.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 13 08:37:41.469: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:41.492: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:43.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:43.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:45.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:45.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:47.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:47.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:49.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:49.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:51.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:51.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:53.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:53.511: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:55.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:55.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:57.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:57.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:37:59.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:37:59.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:01.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:01.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:03.492: INFO: Waiting for pod pod-with-prestop-http-hook to 
disappear Jan 13 08:38:03.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:05.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:05.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:07.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:07.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:09.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:09.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:11.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:11.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:13.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:13.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:15.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:15.503: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:17.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:17.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:19.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:19.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:21.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:21.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:23.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:23.504: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:25.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:25.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:27.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:27.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:29.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:29.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:31.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:31.498: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:33.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:33.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:35.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:35.501: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:37.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:37.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:39.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:39.502: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 08:38:41.492: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 08:38:41.499: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:38:41.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1674" for this suite. 
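In the lifecycle-hook run above, the pod carries a preStop HTTP GET hook; the test deletes the pod, polls until it is gone, and then checks that the hook was delivered. A rough sketch of a pod with such a hook, where the image, host, port and path are placeholders rather than the suite's values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook        # name as it appears in the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox                        # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.2                # placeholder: the handler pod created in BeforeEach
          port: 8080                      # placeholder port
          path: /prestop-hook             # placeholder path

When the pod is deleted, the kubelet calls the GET endpoint before the container is stopped; the test then confirms on the handler side that the request arrived.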
• [SLOW TEST:70.289 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":309,"completed":286,"skipped":5052,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:38:41.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Create set of events Jan 13 08:38:41.721: INFO: created test-event-1 Jan 13 08:38:41.728: INFO: created test-event-2 Jan 13 08:38:41.738: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Jan 13 08:38:41.758: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Jan 13 08:38:41.784: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:38:41.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3871" for this suite. 
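The Events check above creates three labelled events and removes them with a single DeleteCollection call filtered by label. Written out as a manifest, one of those events would look roughly like this; the label key and the involved object are assumptions, only the event name comes from the log:

apiVersion: v1
kind: Event
metadata:
  name: test-event-1
  namespace: events-3871
  labels:
    testevent-set: "true"                 # assumed label used to select the whole collection
involvedObject:
  kind: Pod
  name: example-pod                       # placeholder object the event refers to
  namespace: events-3871
reason: Testing
message: event created only so it can be deleted as part of a collection
type: Normal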
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":309,"completed":287,"skipped":5072,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:38:41.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 08:38:45.522: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 08:38:47.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 08:38:49.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123925, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 08:38:52.799: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:38:53.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8670" for this suite. STEP: Destroying namespace "webhook-8670-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.729 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":309,"completed":288,"skipped":5072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:38:53.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 08:38:57.267: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 08:38:59.341: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 08:39:01.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746123937, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 08:39:04.398: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:39:04.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6328-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4095" for this suite. STEP: Destroying namespace "webhook-4095-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.312 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":309,"completed":289,"skipped":5118,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:05.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-2244059f-5bb4-4a40-a202-a184cdeb1720 STEP: Creating a pod to test consume configMaps Jan 13 08:39:06.035: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4" in namespace "projected-6527" to be "Succeeded or Failed" Jan 13 08:39:06.051: INFO: Pod "pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.188073ms Jan 13 08:39:08.060: INFO: Pod "pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024773633s Jan 13 08:39:10.067: INFO: Pod "pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032554905s STEP: Saw pod success Jan 13 08:39:10.068: INFO: Pod "pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4" satisfied condition "Succeeded or Failed" Jan 13 08:39:10.074: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4 container projected-configmap-volume-test: STEP: delete the pod Jan 13 08:39:10.173: INFO: Waiting for pod pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4 to disappear Jan 13 08:39:10.296: INFO: Pod pod-projected-configmaps-56fd397d-ea96-406f-a2a8-ed9a49b21be4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:10.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6527" for this suite. 
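That projected-ConfigMap case mounts one ConfigMap into the same pod through two separate projected volumes and reads both copies. A condensed sketch of that shape, where the volume names, file key, image and command are illustrative and only the ConfigMap name is taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example  # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                        # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume-1/data; cat /etc/projected-configmap-volume-2/data"]
    volumeMounts:
    - name: projected-configmap-volume-1
      mountPath: /etc/projected-configmap-volume-1
    - name: projected-configmap-volume-2
      mountPath: /etc/projected-configmap-volume-2
  volumes:
  - name: projected-configmap-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-2244059f-5bb4-4a40-a202-a184cdeb1720
  - name: projected-configmap-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-2244059f-5bb4-4a40-a202-a184cdeb1720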
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":309,"completed":290,"skipped":5136,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:10.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Performing setup for networking test in namespace pod-network-test-354 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 13 08:39:10.468: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 08:39:10.582: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 08:39:12.668: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 08:39:14.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:16.590: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:18.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:20.597: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:22.590: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:24.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:26.591: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:28.589: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:30.588: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 08:39:32.615: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 08:39:32.623: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 13 08:39:36.652: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Jan 13 08:39:36.652: INFO: Breadth first check of 10.244.2.125 on host 172.18.0.13... 
Jan 13 08:39:36.656: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.205:9080/dial?request=hostname&protocol=http&host=10.244.2.125&port=8080&tries=1'] Namespace:pod-network-test-354 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:39:36.656: INFO: >>> kubeConfig: /root/.kube/config I0113 08:39:36.719573 10 log.go:181] (0x4000722a50) (0x4001dde5a0) Create stream I0113 08:39:36.719777 10 log.go:181] (0x4000722a50) (0x4001dde5a0) Stream added, broadcasting: 1 I0113 08:39:36.723758 10 log.go:181] (0x4000722a50) Reply frame received for 1 I0113 08:39:36.723903 10 log.go:181] (0x4000722a50) (0x4001dde640) Create stream I0113 08:39:36.723991 10 log.go:181] (0x4000722a50) (0x4001dde640) Stream added, broadcasting: 3 I0113 08:39:36.726045 10 log.go:181] (0x4000722a50) Reply frame received for 3 I0113 08:39:36.726524 10 log.go:181] (0x4000722a50) (0x4000f037c0) Create stream I0113 08:39:36.726659 10 log.go:181] (0x4000722a50) (0x4000f037c0) Stream added, broadcasting: 5 I0113 08:39:36.728792 10 log.go:181] (0x4000722a50) Reply frame received for 5 I0113 08:39:36.802246 10 log.go:181] (0x4000722a50) Data frame received for 3 I0113 08:39:36.802461 10 log.go:181] (0x4001dde640) (3) Data frame handling I0113 08:39:36.802609 10 log.go:181] (0x4001dde640) (3) Data frame sent I0113 08:39:36.803085 10 log.go:181] (0x4000722a50) Data frame received for 5 I0113 08:39:36.803402 10 log.go:181] (0x4000f037c0) (5) Data frame handling I0113 08:39:36.803680 10 log.go:181] (0x4000722a50) Data frame received for 3 I0113 08:39:36.803782 10 log.go:181] (0x4001dde640) (3) Data frame handling I0113 08:39:36.809227 10 log.go:181] (0x4000722a50) Data frame received for 1 I0113 08:39:36.809365 10 log.go:181] (0x4001dde5a0) (1) Data frame handling I0113 08:39:36.809505 10 log.go:181] (0x4001dde5a0) (1) Data frame sent I0113 08:39:36.809657 10 log.go:181] (0x4000722a50) (0x4001dde5a0) Stream removed, broadcasting: 1 I0113 08:39:36.809808 10 log.go:181] (0x4000722a50) Go away received I0113 08:39:36.810407 10 log.go:181] (0x4000722a50) (0x4001dde5a0) Stream removed, broadcasting: 1 I0113 08:39:36.810588 10 log.go:181] (0x4000722a50) (0x4001dde640) Stream removed, broadcasting: 3 I0113 08:39:36.810756 10 log.go:181] (0x4000722a50) (0x4000f037c0) Stream removed, broadcasting: 5 Jan 13 08:39:36.811: INFO: Waiting for responses: map[] Jan 13 08:39:36.811: INFO: reached 10.244.2.125 after 0/1 tries Jan 13 08:39:36.811: INFO: Breadth first check of 10.244.1.204 on host 172.18.0.12... 
Jan 13 08:39:36.817: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.205:9080/dial?request=hostname&protocol=http&host=10.244.1.204&port=8080&tries=1'] Namespace:pod-network-test-354 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 08:39:36.817: INFO: >>> kubeConfig: /root/.kube/config I0113 08:39:36.882957 10 log.go:181] (0x4000818790) (0x4000f03f40) Create stream I0113 08:39:36.883124 10 log.go:181] (0x4000818790) (0x4000f03f40) Stream added, broadcasting: 1 I0113 08:39:36.886604 10 log.go:181] (0x4000818790) Reply frame received for 1 I0113 08:39:36.886792 10 log.go:181] (0x4000818790) (0x4001dde6e0) Create stream I0113 08:39:36.886866 10 log.go:181] (0x4000818790) (0x4001dde6e0) Stream added, broadcasting: 3 I0113 08:39:36.888113 10 log.go:181] (0x4000818790) Reply frame received for 3 I0113 08:39:36.888250 10 log.go:181] (0x4000818790) (0x40027d41e0) Create stream I0113 08:39:36.888316 10 log.go:181] (0x4000818790) (0x40027d41e0) Stream added, broadcasting: 5 I0113 08:39:36.889614 10 log.go:181] (0x4000818790) Reply frame received for 5 I0113 08:39:36.958944 10 log.go:181] (0x4000818790) Data frame received for 3 I0113 08:39:36.959264 10 log.go:181] (0x4001dde6e0) (3) Data frame handling I0113 08:39:36.959474 10 log.go:181] (0x4000818790) Data frame received for 5 I0113 08:39:36.959620 10 log.go:181] (0x40027d41e0) (5) Data frame handling I0113 08:39:36.959743 10 log.go:181] (0x4001dde6e0) (3) Data frame sent I0113 08:39:36.959879 10 log.go:181] (0x4000818790) Data frame received for 3 I0113 08:39:36.959997 10 log.go:181] (0x4001dde6e0) (3) Data frame handling I0113 08:39:36.960700 10 log.go:181] (0x4000818790) Data frame received for 1 I0113 08:39:36.960970 10 log.go:181] (0x4000f03f40) (1) Data frame handling I0113 08:39:36.961182 10 log.go:181] (0x4000f03f40) (1) Data frame sent I0113 08:39:36.961335 10 log.go:181] (0x4000818790) (0x4000f03f40) Stream removed, broadcasting: 1 I0113 08:39:36.961503 10 log.go:181] (0x4000818790) Go away received I0113 08:39:36.961893 10 log.go:181] (0x4000818790) (0x4000f03f40) Stream removed, broadcasting: 1 I0113 08:39:36.962008 10 log.go:181] (0x4000818790) (0x4001dde6e0) Stream removed, broadcasting: 3 I0113 08:39:36.962111 10 log.go:181] (0x4000818790) (0x40027d41e0) Stream removed, broadcasting: 5 Jan 13 08:39:36.962: INFO: Waiting for responses: map[] Jan 13 08:39:36.962: INFO: reached 10.244.1.204 after 0/1 tries Jan 13 08:39:36.962: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:36.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-354" for this suite. 
• [SLOW TEST:26.664 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":309,"completed":291,"skipped":5160,"failed":0} SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:36.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Jan 13 08:39:41.646: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7287 pod-service-account-15b346d9-3cf5-4529-b90d-e25552b198c8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 13 08:39:47.853: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7287 pod-service-account-15b346d9-3cf5-4529-b90d-e25552b198c8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 13 08:39:49.403: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7287 pod-service-account-15b346d9-3cf5-4529-b90d-e25552b198c8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:51.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7287" for this suite. 
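The ServiceAccounts run above execs into the pod and cats the token, CA bundle and namespace files that Kubernetes mounts automatically. A pod that simply prints those same files could be sketched like this; the mount paths are the standard ones shown in the log, while the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: sa-token-reader                   # illustrative name
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox                        # assumed image
    command:
    - sh
    - -c
    - |
      cat /var/run/secrets/kubernetes.io/serviceaccount/token
      cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
      sleep 3600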
• [SLOW TEST:14.062 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":309,"completed":292,"skipped":5169,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:51.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-c6cfa7ed-2ed7-4cd7-b8f4-579d73b3dd40 STEP: Creating a pod to test consume configMaps Jan 13 08:39:51.182: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500" in namespace "projected-6272" to be "Succeeded or Failed" Jan 13 08:39:51.204: INFO: Pod "pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500": Phase="Pending", Reason="", readiness=false. Elapsed: 22.369947ms Jan 13 08:39:53.212: INFO: Pod "pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030665922s Jan 13 08:39:55.231: INFO: Pod "pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049114847s STEP: Saw pod success Jan 13 08:39:55.231: INFO: Pod "pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500" satisfied condition "Succeeded or Failed" Jan 13 08:39:55.239: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500 container agnhost-container: STEP: delete the pod Jan 13 08:39:55.288: INFO: Waiting for pod pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500 to disappear Jan 13 08:39:55.404: INFO: Pod pod-projected-configmaps-ce71d17b-0a82-49c0-b6b4-e608f1027500 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:55.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6272" for this suite. 
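Here the point of the test is the defaultMode field, which sets the permission bits on every file projected from the ConfigMap. A minimal illustration of the field; the ConfigMap name comes from the log, while the mode, image and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-defaultmode  # illustrative name
spec:
  containers:
  - name: agnhost-container
    image: busybox                            # assumed; the suite uses its own agnhost image
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                       # assumed value; every projected file gets these bits
      sources:
      - configMap:
          name: projected-configmap-test-volume-c6cfa7ed-2ed7-4cd7-b8f4-579d73b3dd40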
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":293,"skipped":5172,"failed":0} SSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:55.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating pod Jan 13 08:39:59.588: INFO: Pod pod-hostip-719e4c4c-24a1-487c-9d77-aba761a8e199 has hostIP: 172.18.0.13 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:39:59.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-38" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":309,"completed":294,"skipped":5176,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:39:59.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating replication controller my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19 Jan 13 08:39:59.835: INFO: Pod name my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19: Found 0 pods out of 1 Jan 13 08:40:04.843: INFO: Pod name my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19: Found 1 pods out of 1 Jan 13 08:40:04.843: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19" are running Jan 13 08:40:04.849: INFO: Pod "my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19-858gr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 08:39:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 08:40:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 
00:00:00 +0000 UTC LastTransitionTime:2021-01-13 08:40:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-01-13 08:39:59 +0000 UTC Reason: Message:}]) Jan 13 08:40:04.852: INFO: Trying to dial the pod Jan 13 08:40:09.868: INFO: Controller my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19: Got expected result from replica 1 [my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19-858gr]: "my-hostname-basic-59492b8a-63a5-42bf-b4a1-857556128f19-858gr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:40:09.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5617" for this suite. • [SLOW TEST:10.283 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":309,"completed":295,"skipped":5187,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:40:09.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 08:40:12.053: INFO: Deleting pod "var-expansion-9bc94791-836c-456a-ac39-d834e5a2db16" in namespace "var-expansion-3455" Jan 13 08:40:12.069: INFO: Wait up to 5m0s for pod "var-expansion-9bc94791-836c-456a-ac39-d834e5a2db16" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:40:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3455" for this suite. 
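The variable-expansion case above feeds a volumeMount subPathExpr a value containing backticks and, as the test name says, expects the pod to fail rather than come up; the log only shows the cleanup of that pod. For contrast, a well-formed use of subPathExpr expands $(VAR_NAME) references declared in the container's env, roughly like this (all names, the image and the command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-subpath-example     # illustrative name
spec:
  containers:
  - name: dapi-container
    image: busybox                        # assumed image
    command: ["sh", "-c", "echo hello > /volume_mount/hello.txt && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)            # expands to the pod's own name; a backtick value here is what the test expects to fail
  volumes:
  - name: workdir
    emptyDir: {}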
• [SLOW TEST:42.217 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":309,"completed":296,"skipped":5206,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:40:52.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 13 08:40:52.208: INFO: Waiting up to 5m0s for pod "pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d" in namespace "emptydir-669" to be "Succeeded or Failed" Jan 13 08:40:52.225: INFO: Pod "pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.900774ms Jan 13 08:40:54.234: INFO: Pod "pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026153106s Jan 13 08:40:56.242: INFO: Pod "pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03333521s STEP: Saw pod success Jan 13 08:40:56.242: INFO: Pod "pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d" satisfied condition "Succeeded or Failed" Jan 13 08:40:56.246: INFO: Trying to get logs from node leguer-worker pod pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d container test-container: STEP: delete the pod Jan 13 08:40:56.507: INFO: Waiting for pod pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d to disappear Jan 13 08:40:56.511: INFO: Pod pod-76c199ec-81b4-4233-92e4-7d61ed6ff50d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:40:56.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-669" for this suite. 
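For readers reproducing the (non-root,0777,tmpfs) case above by hand, a minimal sketch of an equivalent pod follows. The pod name, image, UID, and command are illustrative, not the values the e2e framework generates; the relevant pieces are the emptyDir volume with medium: Memory (tmpfs-backed), the non-root security context, and a container that creates a file and checks for mode 0777.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox:1.29               # stand-in for the framework's test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # tmpfs-backed emptyDir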
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":297,"skipped":5228,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:40:56.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating all guestbook components Jan 13 08:40:56.629: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 13 08:40:56.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:40:59.679: INFO: stderr: "" Jan 13 08:40:59.679: INFO: stdout: "service/agnhost-replica created\n" Jan 13 08:40:59.680: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 13 08:40:59.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:41:02.646: INFO: stderr: "" Jan 13 08:41:02.646: INFO: stdout: "service/agnhost-primary created\n" Jan 13 08:41:02.648: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 13 08:41:02.648: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:41:05.250: INFO: stderr: "" Jan 13 08:41:05.250: INFO: stdout: "service/frontend created\n" Jan 13 08:41:05.252: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 13 08:41:05.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:41:08.547: INFO: stderr: "" Jan 13 08:41:08.547: INFO: stdout: "deployment.apps/frontend created\n" Jan 13 08:41:08.549: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 08:41:08.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:41:13.159: INFO: stderr: "" Jan 13 08:41:13.159: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 13 08:41:13.161: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 08:41:13.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 create -f -' Jan 13 08:41:15.404: INFO: stderr: "" Jan 13 08:41:15.404: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Jan 13 08:41:15.405: INFO: Waiting for all frontend pods to be Running. Jan 13 08:41:15.456: INFO: Waiting for frontend to serve content. Jan 13 08:41:16.996: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 13 08:41:22.021: INFO: Trying to add a new entry to the guestbook. Jan 13 08:41:22.034: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jan 13 08:41:22.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:23.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:23.440: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Jan 13 08:41:23.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:24.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:24.931: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 13 08:41:24.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:26.437: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:26.437: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 13 08:41:26.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:27.764: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:27.765: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 13 08:41:27.766: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:29.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:29.095: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Jan 13 08:41:29.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=kubectl-5963 delete --grace-period=0 --force -f -' Jan 13 08:41:30.587: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 08:41:30.588: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:30.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5963" for this suite. 
• [SLOW TEST:34.154 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":309,"completed":298,"skipped":5241,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:30.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override arguments Jan 13 08:41:30.809: INFO: Waiting up to 5m0s for pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51" in namespace "containers-8714" to be "Succeeded or Failed" Jan 13 08:41:31.059: INFO: Pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51": Phase="Pending", Reason="", readiness=false. Elapsed: 249.939119ms Jan 13 08:41:33.075: INFO: Pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26628429s Jan 13 08:41:35.083: INFO: Pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274550667s Jan 13 08:41:37.091: INFO: Pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.282051369s STEP: Saw pod success Jan 13 08:41:37.091: INFO: Pod "client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51" satisfied condition "Succeeded or Failed" Jan 13 08:41:37.096: INFO: Trying to get logs from node leguer-worker pod client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51 container agnhost-container: STEP: delete the pod Jan 13 08:41:37.134: INFO: Waiting for pod client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51 to disappear Jan 13 08:41:37.146: INFO: Pod client-containers-b44df672-8045-4029-9c4f-eecfaddcfb51 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:37.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8714" for this suite. 
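A minimal hand-written sketch of what the args-override spec above exercises: with command: left unset, the container keeps the image's entrypoint and the args: field replaces the image's default arguments (the Docker "cmd"). The pod name, image, and arguments here are illustrative, not the framework's generated values.

apiVersion: v1
kind: Pod
metadata:
  name: override-args-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29                # stand-in; busybox has no ENTRYPOINT, so args act as the full command
    # No `command:` given: only the image's default arguments are overridden.
    args: ["echo", "override", "arguments"]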
• [SLOW TEST:6.480 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":309,"completed":299,"skipped":5243,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:37.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:44.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5789" for this suite. • [SLOW TEST:7.149 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":309,"completed":300,"skipped":5256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:44.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name projected-configmap-test-volume-map-ded6c44a-5fe7-4a31-9e19-86b35a4124d1 STEP: Creating a pod to test consume configMaps Jan 13 08:41:44.463: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60" in namespace "projected-7189" to be "Succeeded or Failed" Jan 13 08:41:44.509: INFO: Pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60": Phase="Pending", Reason="", readiness=false. Elapsed: 46.038207ms Jan 13 08:41:46.519: INFO: Pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055453947s Jan 13 08:41:48.529: INFO: Pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60": Phase="Running", Reason="", readiness=true. Elapsed: 4.065978825s Jan 13 08:41:50.538: INFO: Pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074594435s STEP: Saw pod success Jan 13 08:41:50.538: INFO: Pod "pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60" satisfied condition "Succeeded or Failed" Jan 13 08:41:50.543: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60 container agnhost-container: STEP: delete the pod Jan 13 08:41:50.584: INFO: Waiting for pod pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60 to disappear Jan 13 08:41:50.595: INFO: Pod pod-projected-configmaps-2ffff161-dbb4-49eb-bb74-bb46c86d9e60 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:50.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7189" for this suite. 
• [SLOW TEST:6.296 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":309,"completed":301,"skipped":5285,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:50.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 08:41:50.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516" in namespace "projected-9693" to be "Succeeded or Failed" Jan 13 08:41:50.764: INFO: Pod "downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516": Phase="Pending", Reason="", readiness=false. Elapsed: 14.200083ms Jan 13 08:41:53.030: INFO: Pod "downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279778086s Jan 13 08:41:55.038: INFO: Pod "downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.287792735s STEP: Saw pod success Jan 13 08:41:55.038: INFO: Pod "downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516" satisfied condition "Succeeded or Failed" Jan 13 08:41:55.043: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516 container client-container: STEP: delete the pod Jan 13 08:41:55.083: INFO: Waiting for pod downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516 to disappear Jan 13 08:41:55.098: INFO: Pod downwardapi-volume-aecefb39-fce6-4550-969b-0a8582d86516 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:55.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9693" for this suite. 
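For reference, the memory-limit case above can be approximated by the following hand-written pod (names, image, and the 64Mi limit are illustrative): a projected volume with a downwardAPI source exposes the container's limits.memory through resourceFieldRef, and the container simply prints the mounted file.

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-memlimit-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                          # stand-in for the framework's test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"                           # illustrative limit; this is the value expected in the mounted file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory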
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":309,"completed":302,"skipped":5286,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:55.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-volume-bd7a645d-aae9-4257-9702-8ca7565b4ac3 STEP: Creating a pod to test consume configMaps Jan 13 08:41:55.223: INFO: Waiting up to 5m0s for pod "pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887" in namespace "configmap-7424" to be "Succeeded or Failed" Jan 13 08:41:55.286: INFO: Pod "pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887": Phase="Pending", Reason="", readiness=false. Elapsed: 62.456405ms Jan 13 08:41:57.293: INFO: Pod "pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06977646s Jan 13 08:41:59.300: INFO: Pod "pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076920835s STEP: Saw pod success Jan 13 08:41:59.301: INFO: Pod "pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887" satisfied condition "Succeeded or Failed" Jan 13 08:41:59.306: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887 container agnhost-container: STEP: delete the pod Jan 13 08:41:59.350: INFO: Waiting for pod pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887 to disappear Jan 13 08:41:59.365: INFO: Pod pod-configmaps-395590df-5004-4fd4-81af-d68257ca1887 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:41:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7424" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":309,"completed":303,"skipped":5286,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:41:59.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7489 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7489 I0113 08:41:59.903879 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7489, replica count: 2 I0113 08:42:02.955662 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 08:42:05.956413 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 08:42:05.956: INFO: Creating new exec pod Jan 13 08:42:10.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7489 exec execpodc4lks -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 13 08:42:12.574: INFO: stderr: "I0113 08:42:12.441054 4967 log.go:181] (0x400053ec60) (0x4000154460) Create stream\nI0113 08:42:12.444275 4967 log.go:181] (0x400053ec60) (0x4000154460) Stream added, broadcasting: 1\nI0113 08:42:12.456495 4967 log.go:181] (0x400053ec60) Reply frame received for 1\nI0113 08:42:12.457397 4967 log.go:181] (0x400053ec60) (0x4000638000) Create stream\nI0113 08:42:12.457487 4967 log.go:181] (0x400053ec60) (0x4000638000) Stream added, broadcasting: 3\nI0113 08:42:12.458960 4967 log.go:181] (0x400053ec60) Reply frame received for 3\nI0113 08:42:12.459287 4967 log.go:181] (0x400053ec60) (0x4000154500) Create stream\nI0113 08:42:12.459358 4967 log.go:181] (0x400053ec60) (0x4000154500) Stream added, broadcasting: 5\nI0113 08:42:12.460808 4967 log.go:181] (0x400053ec60) Reply frame received for 5\nI0113 08:42:12.551518 4967 log.go:181] (0x400053ec60) Data frame received for 5\nI0113 08:42:12.552117 4967 log.go:181] (0x4000154500) (5) Data frame handling\nI0113 08:42:12.552435 4967 log.go:181] (0x400053ec60) Data frame received for 3\nI0113 08:42:12.552568 4967 log.go:181] (0x4000638000) (3) Data frame handling\nI0113 08:42:12.553828 4967 log.go:181] (0x4000154500) (5) Data frame sent\nI0113 08:42:12.553989 4967 log.go:181] (0x400053ec60) Data frame received for 1\nI0113 08:42:12.554127 4967 log.go:181] 
(0x4000154460) (1) Data frame handling\nI0113 08:42:12.554287 4967 log.go:181] (0x4000154460) (1) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0113 08:42:12.556025 4967 log.go:181] (0x400053ec60) Data frame received for 5\nI0113 08:42:12.556149 4967 log.go:181] (0x4000154500) (5) Data frame handling\nI0113 08:42:12.558565 4967 log.go:181] (0x400053ec60) (0x4000154460) Stream removed, broadcasting: 1\nI0113 08:42:12.559339 4967 log.go:181] (0x4000154500) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0113 08:42:12.560742 4967 log.go:181] (0x400053ec60) Data frame received for 5\nI0113 08:42:12.561024 4967 log.go:181] (0x4000154500) (5) Data frame handling\nI0113 08:42:12.561421 4967 log.go:181] (0x400053ec60) Go away received\nI0113 08:42:12.566145 4967 log.go:181] (0x400053ec60) (0x4000154460) Stream removed, broadcasting: 1\nI0113 08:42:12.566499 4967 log.go:181] (0x400053ec60) (0x4000638000) Stream removed, broadcasting: 3\nI0113 08:42:12.566729 4967 log.go:181] (0x400053ec60) (0x4000154500) Stream removed, broadcasting: 5\n" Jan 13 08:42:12.575: INFO: stdout: "" Jan 13 08:42:12.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34747 --kubeconfig=/root/.kube/config --namespace=services-7489 exec execpodc4lks -- /bin/sh -x -c nc -zv -t -w 2 10.96.73.138 80' Jan 13 08:42:14.269: INFO: stderr: "I0113 08:42:14.124301 4987 log.go:181] (0x4000f00000) (0x4000495a40) Create stream\nI0113 08:42:14.130296 4987 log.go:181] (0x4000f00000) (0x4000495a40) Stream added, broadcasting: 1\nI0113 08:42:14.144525 4987 log.go:181] (0x4000f00000) Reply frame received for 1\nI0113 08:42:14.145922 4987 log.go:181] (0x4000f00000) (0x400016ca00) Create stream\nI0113 08:42:14.146049 4987 log.go:181] (0x4000f00000) (0x400016ca00) Stream added, broadcasting: 3\nI0113 08:42:14.147952 4987 log.go:181] (0x4000f00000) Reply frame received for 3\nI0113 08:42:14.148464 4987 log.go:181] (0x4000f00000) (0x400016d040) Create stream\nI0113 08:42:14.148586 4987 log.go:181] (0x4000f00000) (0x400016d040) Stream added, broadcasting: 5\nI0113 08:42:14.150813 4987 log.go:181] (0x4000f00000) Reply frame received for 5\nI0113 08:42:14.249468 4987 log.go:181] (0x4000f00000) Data frame received for 5\nI0113 08:42:14.249761 4987 log.go:181] (0x400016d040) (5) Data frame handling\nI0113 08:42:14.249993 4987 log.go:181] (0x4000f00000) Data frame received for 3\nI0113 08:42:14.250206 4987 log.go:181] (0x400016ca00) (3) Data frame handling\nI0113 08:42:14.250468 4987 log.go:181] (0x4000f00000) Data frame received for 1\nI0113 08:42:14.250694 4987 log.go:181] (0x4000495a40) (1) Data frame handling\nI0113 08:42:14.250975 4987 log.go:181] (0x400016d040) (5) Data frame sent\nI0113 08:42:14.251248 4987 log.go:181] (0x4000495a40) (1) Data frame sent\n+ nc -zv -t -w 2 10.96.73.138 80\nConnection to 10.96.73.138 80 port [tcp/http] succeeded!\nI0113 08:42:14.251754 4987 log.go:181] (0x4000f00000) Data frame received for 5\nI0113 08:42:14.251937 4987 log.go:181] (0x400016d040) (5) Data frame handling\nI0113 08:42:14.254484 4987 log.go:181] (0x4000f00000) (0x4000495a40) Stream removed, broadcasting: 1\nI0113 08:42:14.257419 4987 log.go:181] (0x4000f00000) Go away received\nI0113 08:42:14.260998 4987 log.go:181] (0x4000f00000) (0x4000495a40) Stream removed, broadcasting: 1\nI0113 08:42:14.261374 4987 log.go:181] (0x4000f00000) (0x400016ca00) Stream removed, broadcasting: 3\nI0113 08:42:14.261640 4987 log.go:181] (0x4000f00000) (0x400016d040) Stream removed, broadcasting: 
5\n" Jan 13 08:42:14.269: INFO: stdout: "" Jan 13 08:42:14.270: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:42:14.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7489" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • [SLOW TEST:14.948 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":309,"completed":304,"skipped":5288,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:42:14.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating pod busybox-46962904-2fcb-462c-ad15-ecbb69ddd1f4 in namespace container-probe-6120 Jan 13 08:42:18.462: INFO: Started pod busybox-46962904-2fcb-462c-ad15-ecbb69ddd1f4 in namespace container-probe-6120 STEP: checking the pod's current state and verifying that restartCount is present Jan 13 08:42:18.467: INFO: Initial restart count of pod busybox-46962904-2fcb-462c-ad15-ecbb69ddd1f4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:46:19.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6120" for this suite. 
• [SLOW TEST:245.457 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":309,"completed":305,"skipped":5294,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:46:19.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward api env vars Jan 13 08:46:19.951: INFO: Waiting up to 5m0s for pod "downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03" in namespace "downward-api-8555" to be "Succeeded or Failed" Jan 13 08:46:19.989: INFO: Pod "downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03": Phase="Pending", Reason="", readiness=false. Elapsed: 37.610241ms Jan 13 08:46:22.125: INFO: Pod "downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174321485s Jan 13 08:46:24.133: INFO: Pod "downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182105725s STEP: Saw pod success Jan 13 08:46:24.133: INFO: Pod "downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03" satisfied condition "Succeeded or Failed" Jan 13 08:46:24.140: INFO: Trying to get logs from node leguer-worker2 pod downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03 container dapi-container: STEP: delete the pod Jan 13 08:46:24.325: INFO: Waiting for pod downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03 to disappear Jan 13 08:46:24.332: INFO: Pod downward-api-7642048d-53e9-4dab-a6b2-6baa49987e03 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:46:24.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8555" for this suite. 
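The default-limits case above can be sketched as follows (pod name, image, and env var names are illustrative): the container sets no resources.limits, so env vars backed by resourceFieldRef limits.cpu / limits.memory are expected to fall back to the node's allocatable CPU and memory.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-default-limits-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29                       # stand-in for the framework's test image
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # No resources.limits set, so the values below resolve to node allocatable.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory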
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":309,"completed":306,"skipped":5307,"failed":0} ------------------------------ [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:46:24.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Jan 13 08:46:24.528: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Jan 13 08:46:24.596: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:46:24.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-9766" for this suite. •{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":309,"completed":307,"skipped":5307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:46:24.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 08:46:24.815: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3" in namespace "downward-api-4201" to be "Succeeded or Failed" Jan 13 08:46:24.849: INFO: Pod "downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3": Phase="Pending", Reason="", readiness=false. Elapsed: 33.297833ms Jan 13 08:46:26.856: INFO: Pod "downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040974765s Jan 13 08:46:28.864: INFO: Pod "downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048718105s STEP: Saw pod success Jan 13 08:46:28.865: INFO: Pod "downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3" satisfied condition "Succeeded or Failed" Jan 13 08:46:28.870: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3 container client-container: STEP: delete the pod Jan 13 08:46:28.934: INFO: Waiting for pod downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3 to disappear Jan 13 08:46:28.981: INFO: Pod downwardapi-volume-c92c93f6-318c-4bfd-90c4-335a287a31c3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:46:28.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4201" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":309,"completed":308,"skipped":5332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 08:46:29.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 08:46:30.374: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 13 08:46:32.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 13 08:46:34.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, 
loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63746124390, loc:(*time.Location)(0x7089440)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 08:46:37.432: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 08:46:37.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6064" for this suite. STEP: Destroying namespace "webhook-6064-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.577 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":309,"completed":309,"skipped":5358,"failed":0} Jan 13 08:46:38.580: INFO: Running AfterSuite actions on all nodes Jan 13 08:46:38.581: INFO: Running AfterSuite actions on node 1 Jan 13 08:46:38.582: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":309,"completed":309,"skipped":5358,"failed":0} Ran 309 of 5667 Specs in 8893.165 seconds SUCCESS! -- 309 Passed | 0 Failed | 0 Pending | 5358 Skipped PASS
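As a pointer for the final webhook spec in this run, the registration it performs is roughly of the following shape (a hand-written sketch: the webhook name, path, and caBundle are placeholders; the backing sample-webhook-deployment and the e2e-test-webhook Service that the framework deploys in namespace webhook-6064 are omitted). A MutatingWebhookConfiguration like this intercepts ConfigMap CREATE requests, and the spec then checks that a newly created ConfigMap comes back mutated.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmap-example          # illustrative name
webhooks:
- name: mutate-configmap.example.com      # illustrative name
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-6064             # namespace used by the run above
      name: e2e-test-webhook              # service name from the run above
      path: /mutating-configmaps          # illustrative path
    caBundle: ""                          # placeholder; the framework injects its generated CA here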